In the year since ChatGPT was released, people have been figuring out what it's good at, what it's not good at, and how AI tools will change how we live and work.
The company's non-profit board and for-profit arm have long been at odds. CEO Sam Altman's week-long ouster represents the culmination of that long-simmering tension.
In recent research, AI has done a credible job of diagnosing health complaints. But should consumers trust unregulated bots with their health care? Doctors see trouble brewing.
NPR's Scott Simon has an idea for newspapers experimenting with AI: hire high school journalists to cover high school games rather than settle for substandard reporting.
The news publisher and maker of ChatGPT have held tense negotiations over striking a licensing deal for the use of the paper's articles to train the chatbot. Now, legal action is being considered.
The project is called Worldcoin. It was co-founded by Sam Altman of ChatGPT fame. Its mission is to authenticate all the world's humans, one eyeball scan at a time.
ChatGPT sees its first hint of regulation as the Federal Trade Commission requests documentation about its maker's business practices.
Wednesday on Political Rewind: Artificial intelligence tools like ChatGPT are already changing aspects of our daily lives, but what will our future with this technology look like? Host Bill Nigut welcomes Georgia Tech's Mark Riedl and Brian Magerko to explain.
A group of economists conducted one of the first empirical studies of "generative AI" at a real-world company. They found it had big effects.
New startups believe chatbot technology could help reduce the burden on physicians. But some academics warn bias and errors could hurt patients.
Italian authorities have temporarily banned ChatGPT while they investigate the company behind the AI tool. Italy is believed to be the first government to take such a measure against ChatGPT.
A group of prominent computer scientists and other tech industry notables is calling for a six-month pause to weigh the risks of powerful AI systems, including the successor to the technology behind ChatGPT.
After Microsoft's powerful AI chatbot verbally attacked people, and even compared one person to Hitler, the company has decided to rein in the technology until it works out the kinks.
Google's Bard, an answer to the Microsoft-backed ChatGPT, delivered a factual error in a search demo that the company shared widely. That sent Alphabet's market value plummeting this week.
Computers traditionally excel at math, so why do new artificial intelligence programs get it wrong?