The AI Frontier and What’s Next for Business

As AI becomes increasingly integrated into businesses and society as a whole, it is essential to establish governance that ensures its ethical use, writes Adriana Hoyos.

Artificial Intelligence is clearly revolutionizing the ways businesses operate. At its core, AI is the development of machines and systems that can perform tasks traditionally requiring human intelligence. These tasks range from learning and reasoning to problem-solving and natural language understanding, and the applications are already ubiquitous – customer service chatbots engage with customers around the clock, while predictive analytics optimize marketing strategies and streamline supply chain management.

For example, Amazon’s recommendation system, powered by machine learning, personalizes the shopping experience by suggesting products based on users’ browsing and purchase history. In finance, companies like JPMorgan Chase use AI algorithms to detect fraudulent transactions, improving both security and customer experience. The public sector is likewise harnessing AI’s potential. The U.S. Department of Defense, for instance, invests in AI technologies that enhance national security through advanced surveillance and data analysis.

Not all AI is created equal, however. The current state of AI is what is known as Artificial Narrow Intelligence (ANI), and it represents a significant but limited step toward more advanced forms of artificial intelligence; in other words, we are still in the early stages. The AI systems we interact with daily – from virtual assistants like Siri and Alexa to recommendation engines – fall under this ANI category. Also referred to as weak AI, ANI is designed and trained to perform a specific task, one thing at a time, and cannot handle tasks outside its programmed scope.

Yet despite these limitations, ANI has had a hugely transformative impact across industries. In manufacturing, Siemens uses AI-powered predictive maintenance systems to monitor equipment performance, reducing downtime and improving operational efficiency. In healthcare, AI-driven diagnostic tools help doctors analyze medical images to identify conditions such as cancer or cardiovascular disease more accurately and quickly, and are in use at hospitals including Mass General Brigham in Boston.
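To make the predictive-maintenance idea concrete, here is a minimal, hypothetical sketch of the underlying anomaly-detection pattern using scikit-learn; the simulated sensor data, thresholds, and model choice are illustrative assumptions, not a description of Siemens’ actual system.

```python
# Toy illustration of predictive maintenance: flag anomalous sensor
# readings that may precede equipment failure.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)
# Simulated vibration readings from healthy equipment.
normal_vibration = rng.normal(loc=1.0, scale=0.1, size=(500, 1))
model = IsolationForest(contamination=0.01, random_state=42).fit(normal_vibration)

new_readings = np.array([[1.02], [0.97], [1.85]])  # last value is suspicious
print(model.predict(new_readings))  # -1 marks a likely anomaly -> schedule inspection
```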

Generative AI, a subset of ANI, is one of the more exciting advancements in the field. Unlike traditional rule-based systems, generative models like OpenAI’s GPT-4o or DALL·E 3 use machine learning to create original content – text, images, video – based on the data they have been trained on. Businesses across industries are using generative AI to create marketing copy, generate product designs, and simulate customer interactions.

LLMs, a type of generative AI, can significantly boost productivity by automating routine tasks, such as drafting reports or generating customer responses. This automation allows teams to focus on more strategic work. LLMs also enhance efficiency by quickly processing vast amounts of data, providing insights and solutions in a fraction of the time it would take humans. While there are numerous LLMs in development, some of the leading models include OpenAI’s GPT-4o, Google’s Gemini, Anthropic’s Claude, xAI’s Grok, and Meta’s Llama 2. Many other models are pushing the boundaries of AI-driven content generation and analysis, including Cohere’s Command R, Mistral 7B, Amazon Titan, Falcon 40B, MosaicML’s MPT, Microsoft’s Orca and Copilot, StableLM, LMSYS’s Vicuna-33B, Stanford’s Alpaca, Google’s LaMDA, Flan-UL2, and PaLM, and DeepMind’s Gato.
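As a concrete illustration of this kind of routine-task automation, here is a minimal sketch using OpenAI’s Python SDK; the model choice, prompt wording, and support scenario are assumptions for illustration, not a production pattern.

```python
# Minimal sketch: drafting a customer-support reply with an LLM.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model would work here
    messages=[
        {"role": "system",
         "content": "You draft polite, concise customer-support replies."},
        {"role": "user",
         "content": "A customer says order #1234 arrived damaged. "
                    "Draft a reply apologizing and offering a replacement."},
    ],
)
print(response.choices[0].message.content)
```

In a real deployment, drafts like this would typically pass through human review before reaching customers.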

Generative AI has found particularly innovative applications in the media and entertainment industry. Companies like Netflix, with its machine learning research division, and Disney are exploring AI-generated content to streamline production processes. However, the rise of generative AI also raises ethical questions, particularly regarding copyright, authenticity, and the potential for deepfakes and misinformation. These concerns, compounded by AI “hallucinations” (outputs stated confidently but false), underscore the need for robust policies, regulation, and governance.

While ANI is highly specialized and efficient, it is not capable of human-like reasoning or decision-making. It excels at routine, repetitive tasks but cannot adapt beyond its programming. This limitation has led researchers and companies to pursue the next frontier: Artificial General Intelligence (AGI). Often referred to as strong AI, AGI represents a more ambitious goal in AI development: unlike ANI, it aims to replicate human-like abilities across a wide range of tasks. The ambition lies in the fact that an AGI could think abstractly, adapt to new situations, and solve problems without being pre-programmed for a specific task.

While AGI remains theoretical at this point, it is the subject of intense research and development. Google DeepMind’s AlphaGo, which defeated a human champion at the game of Go, is often cited as a milestone – however, it is important to note that AlphaGo operates within the narrow confines of game-playing, and so remains squarely in ANI territory.

The potential implications of AGI for society and industry are profound. In healthcare, AGI could revolutionize diagnosis and treatment, potentially addressing global healthcare challenges with unprecedented efficiency and accuracy. In finance, AGI systems could manage entire markets, making investment decisions and maintaining stability at a scale beyond human capacity, let alone that of narrow AI.

However, with these advances also come challenges. As we approach the reality of machines capable of performing complex human-like tasks, there is a need for governance frameworks and regulatory structures to ensure that AGI is used responsibly and for the benefit of society.

Artificial Superintelligence (ASI), often associated with the idea of the Technological Singularity, represents the possibility of AI surpassing human intelligence in all domains. This includes not only analytical capabilities but also creativity, decision-making, problem-solving, and emotional intelligence. While ASI remains speculative, its potential implications are both exciting and deeply concerning. An ASI would be able to outperform humans in every intellectual endeavor, from scientific research to policymaking, creating opportunities for rapid progress while also posing existential risks to humanity.

Prominent figures including Elon Musk, Yuval Noah Harari, Geoffrey Hinton, and the late Stephen Hawking have voiced concerns that ASI’s unchecked development could lead to unintended consequences. Their warnings center on the idea of creating machines that are not only more intelligent than humans but also potentially autonomous. This scenario raises questions about control, governance, and the ethical use of such advanced AI systems.

Indeed, the rapid pace of AI development has heightened the need for thoughtful regulation and policy frameworks. As AI becomes increasingly integrated into business operations, government functions, and society at large, it is crucial to establish guidelines that ensure its responsible and ethical use. Current AI governance already encompasses regulations addressing liability, design, copyright, privacy, anti-discrimination, and product safety, with AI-specific regulatory efforts gaining momentum around the world.

A significant milestone in AI regulation came this year with the European Union’s passage of the long-anticipated EU AI Act, which, like the General Data Protection Regulation (GDPR) before it, is expected to shape legislation globally. The United States, by contrast, still lacks comprehensive federal legislation that specifically regulates AI development or restricts its use. Certain federal laws do address AI, however: the National AI Initiative Act of 2020 aims to advance AI research and development, and it established the National Artificial Intelligence Initiative Office, which oversees the implementation of the US national AI strategy. AI was also mentioned in legislative proceedings twice as often in 2023 as in 2022, and some regulations, such as China’s Interim Administrative Measures for Generative Artificial Intelligence Services, focus specifically on generative AI.

International collaboration on AI standards is also increasing, with organizations like the OECD, NIST, UN, ISO, and the G7 driving initiatives. Additionally, new AI safety institutes have emerged in 2024 across the US, UK, Singapore, and Japan. This expansion of regulations and standards is expected to continue, with international agreements on standards playing a critical role in managing AI risks while fostering innovation.

Understanding the different stages of AI development – from ANI to the potential of AGI and ASI – is essential for leaders making strategic and informed decisions about AI integration. While ANI is transforming industries, the possibilities presented by AGI and ASI could redefine how businesses and governments operate.

Effective AI self-governance will require both organizational and technical controls. Many organizations are adopting self-governance to align with their values and build credibility, often going beyond regulatory requirements. Voluntary frameworks like the UK AI Safety Institute’s Inspect platform, the US NIST’s AI Risk Management Framework, and Singapore’s AI Verify offer valuable guidance in this endeavor. At the technical level, AI governance demands rigorous standards such as ISO/IEC 42001 alongside automated safety controls: tools like AI red teaming (probing models for harmful behaviors), metadata logging, and continuous monitoring become crucial as increasingly capable AI systems require real-time oversight.
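As a hedged sketch of the metadata-logging idea, the following Python wrapper records an audit trail (timestamp, model name, prompt hash, latency) around any text-generation call; the log format, field names, and file destination are hypothetical, not drawn from any particular governance framework or product.

```python
# Hypothetical metadata-logging wrapper for AI governance audits.
import hashlib
import json
import time
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # illustrative; a real system would use a secured store

def logged_call(model_name, generate_fn, prompt):
    """Wrap any text-generation callable with basic audit logging."""
    start = time.monotonic()
    response = generate_fn(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        # Hash the prompt rather than storing it verbatim, keeping an
        # auditable trail while limiting exposure of sensitive inputs.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "latency_s": round(time.monotonic() - start, 3),
        "response_chars": len(response),
    }
    with open(AUDIT_LOG, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return response

# Demo with a stand-in generator; swap in a real model call in practice.
print(logged_call("demo-model", lambda p: p.upper(), "summarize q3 figures"))
```

Hashing the prompt rather than storing it verbatim is one common design choice here: it preserves an auditable record of what was sent without retaining sensitive inputs in plain text.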

Amid the push for automation, however, human-AI collaboration remains key. Industry leaders are setting examples: IBM uses its AI Ethics Board and Integrated Governance Program to manage its own systems, while its watsonx.governance product helps clients implement governance and technical controls. Similarly, Microsoft’s Responsible AI Transparency Report discloses how the company builds AI applications with accountability in mind.

The continued advancement of AI technology underscores the need for thoughtful regulation and ethical consideration, so that the technology’s benefits to society are maximized while its risks are mitigated. Companies that proactively adopt responsible AI practices, with an eye toward regulation, will be best positioned to succeed in this rapidly evolving landscape. In short, AI is far more than a mere tool; it is a transformative force reshaping the future of business, government, and society. Now is the time to understand and embrace it wisely, because the choices we make today about governing and implementing AI will shape the world we create for generations to come.

The global nature of artificial intelligence requires that countries align their regulatory frameworks to prevent disparities that could lead to unintended consequences. As AI systems and technologies cross borders seamlessly, inconsistent regulations can result in a fragmented global ecosystem, fostering gaps in security, innovation, and ethical governance.

Recent discussions at the United Nations underscore the urgency of a globally coordinated approach to AI governance, emphasizing that no nation acting alone can fully mitigate the risks AI poses to privacy, security, and human rights. A collaborative effort must balance innovation with accountability, ensuring that countries adhere to ethical standards in their use of AI.

China’s relatively lax stance on AI regulation raises concerns about the imbalance this could create in the global AI landscape. While many countries, particularly in Europe and through UN initiatives, are striving for stringent oversight, China’s approach is less restrictive, focusing on technological growth and economic dominance.

This divergence could create significant ethical and safety risks, as AI systems developed under more lenient standards may interact with, or influence, systems and societies in nations with stricter regulations. To avoid the pitfalls of a disjointed regulatory environment, global leaders must reach a consensus on key aspects of AI governance – transparency, data protection, and accountability – to ensure that AI serves humanity as a whole.

© IE Insights.