A year ago, Bard was just a liberal arts college in New York and Sydney was the most populous city in Australia, but times have changed. These friendly monikers now represent the immensely powerful and game-changing potential of artificial intelligence.
With “Pause Giant AI Experiments: An Open Letter,” leading tech and AI experts, including AI pioneer Yoshua Bengio, Elon Musk, Andrew Yang, and Apple co-founder Steve Wozniak, called for a hiatus on further AI development.
How’d We Get Here?
AI has played a crucial role in our lives for some time now. Can you imagine life without Siri, Alexa, Netflix recommendations, email spam filters, and Google search? So, what has changed?
The release of OpenAI’s ChatGPT in late 2022 opened the floodgates to a new wave of AI seemingly overnight. OpenAI backer Microsoft soon followed with Bing Chat, which uses ChatGPT technology. And before long, Google opened access to Bard for a select group of users.
While powered by large language models (LLMs) similar to those behind their predecessors, these new tools pose increasingly profound risks, including:
- Lack of Creator Control
Not even the creators of this technology can fully understand or control these new systems.
- National Cybersecurity
These tools could enable malicious code creation.
- Data Privacy
Leak risks will only continue to grow as more enterprise strategy, IP, and confidential data are fed into LLMs.
- Biased & Discriminatory Results
These LLMs draw from data that might include unwanted biases and consequently spread and reinforce those biases.
Meanwhile, the intense race to create even more powerful tools has brought these concerns to a fever pitch.
What Should Come Next?
AI critics have largely inflated the power of ChatGPT and similar generative AI systems. We’re still a far cry from the sci-fi-esque “artificial general intelligence” that can solve problems it wasn’t trained to solve. This doesn’t mean continued development shouldn’t be closely monitored, but a full-blown pause would be an overreaction as things stand. After all, AI is all around us, yet these “nonhuman minds” have yet to make us “obsolete.”
Lawmakers and regulators have historically been slow to fully understand and think critically about emerging technology. There’s little to suggest AI would be any different. As a result, any move would be hasty and uninformed. And in fact, the only member of Congress with an advanced degree in artificial intelligence, Rep. Jay Obernolte, R-Calif., agrees:
“Before we can create a regulatory framework around AI, we have to be very explicit about what our goals are with our regulation. In other words, what kind of bad behavior and bad outcomes are we trying to prevent? What are we afraid might happen?”
It’s also worth noting that a pause, be it self-enforced or government-mandated, wouldn’t extend to malicious actors or foreign countries. A six-month advantage for these players would only be to America’s detriment.
The letter’s long-term concerns are certainly valid but until government officials can better understand AI and establish thoughtful goals for laws and regulations, the current risks of AI development pale in comparison to the implications of a perfunctory six-month pause.