Tech Billionaire Raises Alarm Over AI Misuse By Non-Democratic States
04 July, 2024
The rapid advancement of artificial intelligence (AI) has brought about a paradigm shift in sectors ranging from oil and gas to defense and intelligence. However, the potential misuse of AI technology has raised concerns among industry leaders, academics, and the general public. In a recent interview, Tom Siebel, the CEO of C3.ai, expressed his apprehensions about the risks associated with AI, especially in the context of warfare. Siebel’s company provides AI solutions to several oil and gas companies as well as the U.S. defense and intelligence communities. Yet he is cautious about dealing with nations that are not democratic allies, fearing the potential misuse of AI technologies.
The MIT Sloan Management Review and Boston Consulting Group recently conducted a panel discussion with AI experts. The majority of the panelists were hesitant to agree that companies are making adequate investments in responsible AI, despite growing awareness of its risks. These concerns stem from various issues like bias in AI, hallucinations, and drift.
Bias in AI refers to systematic unfairness towards certain groups of people, which can disproportionately harm marginalized communities. Hallucinations are outputs that sound plausible and confident but are factually wrong or fabricated. Drift describes the way a model’s behavior or accuracy degrades as real-world data diverges from the data it was trained on, which is why deployed models require constant monitoring and recalibration.
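To make the drift idea concrete, here is a minimal illustrative sketch (not from the article, and a deliberate simplification): one common way teams monitor drift is to compare the distribution of incoming live data against the training data for a given feature. The function name, threshold, and synthetic data below are all assumptions for illustration.

```python
import numpy as np

def mean_drift_score(train: np.ndarray, live: np.ndarray) -> float:
    """Standardized difference between the mean of the training data
    and the mean of live data (a z-like score). Larger values suggest
    the live distribution has drifted away from training."""
    pooled_se = np.sqrt(
        train.var(ddof=1) / len(train) + live.var(ddof=1) / len(live)
    )
    return abs(train.mean() - live.mean()) / pooled_se

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)    # historical data
stable = rng.normal(loc=0.0, scale=1.0, size=5000)   # same distribution
shifted = rng.normal(loc=0.8, scale=1.0, size=5000)  # mean has drifted

# A score well above ~3 flags drift; the stable stream stays low,
# the shifted stream scores very high.
print(mean_drift_score(train, stable))
print(mean_drift_score(train, shifted))
```

In practice, production monitoring uses richer tests (for example, two-sample statistical tests or population-stability metrics per feature), but the principle is the same: detect when live inputs stop resembling the training data, then retrain or recalibrate.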
A recent survey by Pew Research Center revealed that 52% of Americans are more concerned than excited about the increased use of AI. A significant majority (71%) opposed the use of AI in making final hiring decisions. Women tend to view AI more negatively than men.
Despite these concerns, many organizations consider risk factors a critical aspect when evaluating new uses of AI tools. Steve Mills, BCG’s chief AI ethics officer, emphasizes that the goal should not be to replace human workers but to enhance their jobs by pairing them with AI. This approach allows for increased productivity and harnesses human creativity and ingenuity.
AI’s effectiveness is maximized when there’s human oversight. Many companies have employees rigorously review the AI models they create or use, ensuring privacy and data security. Tech giants are also keen to share their responsible AI ethos publicly to alleviate concerns about the rapidly evolving technology.
Rob Thomas, senior vice president of software and chief commercial officer at IBM, stresses the importance of transparency in AI development and data sourcing. He believes regulation should oversee AI use cases, not the technology’s development. Additionally, governance is crucial for understanding how models are performing.
In September 2023, German software giant SAP introduced a new generative AI copilot called Joule. The copilot is being integrated into applications ranging from supply chain management to finance and procurement. SAP is also aware of the potential for bias in large language models and has invested resources in mitigation efforts.
AI’s potential extends beyond business and industry. Tony Habash, chief information officer at the American Psychological Association, sees promising uses for AI in psychology, from AI-powered note-taking to providing treatment indicators for therapists. He believes that the human-machine relationship will be the most significant change moving forward.
In conclusion, while there are valid concerns about the potential misuse of artificial intelligence, there is also growing awareness of the need for responsible AI. As we continue to navigate this rapidly evolving field, it is crucial to strike a balance between leveraging AI’s benefits and mitigating its risks. This will require ongoing discussion, training, and testing, and most importantly, a commitment to ethical and responsible AI use.