AI risky but has potential to improve quality of life: AI experts

09/25/2023 10:46 PM
Andrew Ng, founder of DeepLearning.AI and co-founder and head of Google Brain, gives a speech at a forum on the new technology in Taipei Monday. CNA photo Sept. 25, 2023

Taipei, Sept. 25 (CNA) Artificial intelligence experts speaking at a Taipei forum Monday on the AI revolution said that while AI poses risks, they had faith people would work together to control the tool and use it to help solve problems facing humanity.

Among those experts were Sam Altman, CEO of OpenAI, the company that released ChatGPT in November 2022, and Andrew Ng, founder of DeepLearning.AI and co-founder and head of Google Brain.

Ng said AI carries many risks, the biggest of which relate to its social impact, such as the "bias, fairness, and inaccuracy" that people have voiced concerns about.

"But AI technology is rapidly improving and becoming much safer," Ng said at the forum held by the Foxconn-affiliated Yonglin Charity Foundation on how the AI revolution will shape the future.

When asked about AI's risks, Altman agreed. He said things go wrong with every new technology, and "we think it is important to make mistakes, not just OpenAI, but society's collective mistakes while the stakes are low."

"We put these tools out in the world and see how people use them and abuse them in ways we didn't think about, and every time we put out a new system, we want to get better and better," Altman said.

"So GPT-3 had many flaws, GPT-4 has less flaws, and GPT-N will have even less, and in this way systems get safer," Altman said.

OpenAI CEO Sam Altman takes part in a conversation at the AI forum in Taipei Monday. CNA photo Sept. 25, 2023

Mark Chen, head of Frontiers Research at OpenAI, added that, "if the last five years of safety research has taught us anything, it's that you really can't do safety research in a vacuum."

Chen said AI needs to be deployed in real-life situations so researchers can see how people actually use it.

"I think this also underscores why international cooperation is important," he said.

On the particular risk that AI, trained on existing language data, will reinforce biases people already hold, Altman said it is possible to incorporate human feedback and set boundaries.

"No two people on earth will probably ever agree that one system is unbiased," he argued, saying it was very important to make systems perform reliably within boundaries agreed upon by society as a whole.

He acknowledged that if one just trains a model on the raw internet, "you're probably going to get what reasonable people would call biased in different ways."

But there have been techniques developed, "like reinforcement on human feedback...[through which] you can surprisingly steer these models," Altman said, noting that even experts on bias have said GPT-4 "is less biased than human experts on [certain] topics."

Andrew Ng, founder of DeepLearning.AI and co-founder and head of Google Brain, talks about AI in a recent CNA interview in the U.S.

Ng also addressed a question he said he has encountered many times: the worry that AI will lead to human extinction, a worry he thinks is "overhyped."

He said AI would probably one day be smarter than any of us, "and is already smarter than any of us in some ways."

"But humanity has lots of experience controlling things far more powerful than ourselves, like corporations and nation states, and we've managed to make it okay, so I don't really doubt our ability to control AI," Ng said.

In fact, Ng contended, when it comes to real existential risks to humanity, "such as the next pandemic or climate change, AI will be a key part of the solution."

Altman concurred. "I think we have, as a society, successfully confronted very risky technologies and very powerful systems and institutions before, and will do that again."

New technologies will inevitably bring some disruption, he said, and that is precisely why it is important to "gradually adapt." He opposed the position, advocated by some people in the field, to "basically build AGI [artificial general intelligence] secretly and then push the button one day."

Jay Lee, Clark Distinguished Chair professor and director of the Industrial AI Center at the University of Maryland, gives a keynote speech during the AI forum in Taipei Monday. CNA photo Sept. 25, 2023

Regarding AGI, which Ng described as "AI that can do any intellectual task that any human can," he felt it was still "many decades away," but Altman disagreed.

"I think the rate of change will be fast. I don't think AGI is decades away. Each year more amazing things will be possible that weren't possible the year before. [When] we look backwards, you know, most of the world didn't predict GPT."

"I think we're onto something pretty deep and fundamental. And it'll be great," he said.

Also taking part in Monday's forum was Jay Lee, Clark Distinguished Chair professor and director of the Industrial AI Center at the University of Maryland, who shared cases of AI implementation that helped industries perform better.

(By Alison Hsiao)

Enditem/ls
