Attendees of the annual World Economic Forum couldn’t get enough of a new development in the realm of artificial intelligence: generative AI.
Priya Lakhani, CEO of online learning platform Century, said educators flocked to social media moments after ChatGPT's release to discuss how the AI could affect the education sector.
“It’s really amazing actually. What I’ve seen across social media conversations is that there are educators who are seeing it as an enabler, and that’s fascinating,” Lakhani said during a WEF panel discussing the potential and pitfalls of generative AI.
“They’ve gotten over the digital fatigue after the pandemic, they’re interested in the technology, they’re using learning management systems, virtual learning environments, and they’re thinking, OK, how can we use this and how can we use it as an enabler across different contexts.”
Most machine learning tools rely on existing information and identify patterns in the data to pick out trends or reach a preferred outcome. Recommendation algorithms on social apps like Facebook and TikTok serve users ads based on their browsing behavior.
Generative AI tools like ChatGPT and Dall-E stand out from the crowd through their ability to take data inputs and create new content. People have used the technology to generate everything from college essays to works of art.
Using services like Lensa AI to turn selfies into a variety of sci-fi and anime-inspired avatars has also proven popular.
Generative AI has big implications for the way children learn, Lakhani said, adding that the technology has also heightened the risk of cheating and plagiarism.
“Then you get the skeptics who are absolutely terrified, right?” she said. “They’re terrified because they’re thinking, hang on, kids are going to cheat on their homework. That has real-world implications.”
This week at the WEF's annual meeting in Davos, Switzerland, generative AI virtually replaced crypto and so-called “Web3” as the hyped technology of choice for top business executives and policymakers.
“Generative AI has a huge potential,” said Hiroaki Kitano, CEO of Sony Computer Science Laboratories, on Tuesday’s generative AI panel.
“This is not just something coming up all of a sudden. We have a long history of deep learning,” Kitano said. “This is like a continuous evolution of the AI capability.”
Microsoft is reportedly betting billions on generative AI in hopes that it will be transformative for its business — and others as well. Last week, news site Semafor reported that the company was planning to invest $10 billion in ChatGPT creator OpenAI in a deal valuing the company at $29 billion.
Not everyone is convinced by the billions suddenly sloshing around in generative AI.
Jim Breyer, founder and CEO of Breyer Capital, said that Microsoft’s investment in OpenAI was good for the company from a strategic standpoint, but he believes the Redmond tech giant is overpaying.
“It’s a sign to me of the froth. It’s a strategic deal for Microsoft, and they’re going to catch up quickly to Google and others,” Breyer told CNBC’s Sara Eisen Thursday.
“However, I can’t justify the valuation as a private investor.”
It’s easy to see why Microsoft is excited. ChatGPT has shown the ability to come up with more creative answers than tools that produce mainly generic responses to user queries.
Take, for instance, someone wanting to know what to do for their child’s birthday party. ChatGPT could devise a plan for the day, including advice on what sort of cake to buy or games to play.
In that sense, ChatGPT has been touted as a potential Google disruptor, giving users an alternative to the search engine pioneer. The chatbot’s novel responses have even prompted questions about whether its reasoning resembles human-like cognition.
OpenAI CEO Sam Altman has admitted the limitations of ChatGPT, tweeting in December that it was “a mistake to be relying on it for anything important right now.”
“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” Altman said at the time.
ChatGPT’s limitations include factual errors, and Sony’s Kitano also said it was important to recognize those constraints.
“At the same time, we see a lot of limitations. If you ask ChatGPT a specific question, sometimes answers are impressive. But if you go into the details, all the factual things may not be that accurate,” he said.
“If you go back and open the PC and ask about yourself, you see like, ‘Oops, I don’t get this,’ all kinds of things are going on there.”
Without directly confirming the investment Tuesday, Microsoft President Brad Smith said generative tools like ChatGPT have already sparked conversations about legal and ethical quandaries.
“What one really needs to start to imagine is, what are the various ways this technology can be used? How can it be used for good, how can it be used to create challenges?” Smith said in a panel moderated by CNBC’s Karen Tso Tuesday.
One concern is that generative AI may become a desirable weapon for hackers and other bad actors, such as online disinformation operatives.
Researchers at cybersecurity firm Check Point say ChatGPT is already being used by hackers to recreate common malware strains.
“We may find that it will become a more relevant topic as people are thinking about the future of information, potential influence operations, people creating disinformation and also combating it as well,” Smith said.