ChatGPT developer OpenAI’s approach to building artificial intelligence came under fire this week from former employees who accuse the company of taking unnecessary risks with technology that could become harmful.

Today OpenAI released a new research paper apparently aimed at showing it is serious about tackling AI risk by making its models more explainable. In the paper, researchers from the company lay out a way to peer inside the AI model that powers ChatGPT. They devised a way to identify how it stores certain concepts, including those that might cause an AI system to misbehave.

Although the research makes OpenAI’s work on keeping AI in check more visible, it also highlights recent turmoil at the company. The new research was performed by OpenAI’s recently disbanded “superalignment” team, which was dedicated to studying the long-term risks posed by the technology.

The former group’s coleads Ilya Sutskever and Jan Leike, both of whom have left OpenAI, are named as coauthors. Sutskever, a cofounder of the company and formerly chief scientist, was among the board members who voted to fire OpenAI CEO Sam Altman last November, triggering a chaotic few days that culminated in Altman’s return as leader.

ChatGPT is powered by a family of so-called large language models called GPT, based on an approach to machine learning known as artificial neural networks. These mathematical networks have shown great power to learn useful tasks by analyzing example data, but their workings cannot be easily scrutinized the way conventional computer programs can. The complex interplay between the layers of “neurons” within an artificial neural network makes it hugely challenging to reverse engineer why a system like ChatGPT came up with a particular response.

“Unlike with most human creations, we don’t really understand the inner workings of neural networks,” the researchers behind the work write in an accompanying blog post. Some prominent AI researchers believe that the most powerful AI models, including ChatGPT, could perhaps be used to design chemical or biological weapons and coordinate cyberattacks. A longer-term concern is that AI models may choose to hide information or act in harmful ways in order to achieve their goals.

OpenAI’s new paper outlines a technique that lessens the mystery a little, by identifying patterns that represent specific concepts inside a machine learning system with help from an additional machine learning model. The key innovation lies in making that second network, which identifies concepts inside the system of interest, more efficient to train.
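The general idea behind this kind of second network can be sketched in a few lines. A small model maps an activation vector from the system of interest to a much larger set of candidate features, of which only a handful are allowed to be active at once, so each feature tends to line up with a single human-interpretable concept. The sketch below is illustrative only: the dimensions, weights, and top-k sparsity rule are toy stand-ins, not OpenAI’s actual method or scale, and in practice the weights are trained to minimize reconstruction error plus a sparsity penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 16      # width of the model's hidden activations (toy scale)
n_features = 64   # dictionary of candidate "concept" features, wider than d_model

# Encoder/decoder weights; in a real system these would be learned, not random.
W_enc = rng.normal(0, 0.1, (n_features, d_model))
b_enc = np.zeros(n_features)
W_dec = rng.normal(0, 0.1, (d_model, n_features))

def encode(x, k=4):
    """Map an activation vector to sparse feature activations,
    keeping only the top-k values (one common way to enforce sparsity)."""
    z = np.maximum(W_enc @ x + b_enc, 0.0)  # ReLU
    idx = np.argsort(z)[:-k]                # everything except the k largest
    z[idx] = 0.0
    return z

def decode(z):
    """Reconstruct the original activation from the sparse features."""
    return W_dec @ z

x = rng.normal(size=d_model)   # stand-in for one hidden activation vector
z = encode(x)
x_hat = decode(z)

print("active features:", np.nonzero(z)[0])   # only a few features fire
```

Because only a few features fire for any given input, researchers can inspect which inputs activate each feature and attach a label, such as profanity, to it.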

OpenAI proved out the approach by identifying patterns that represent concepts inside GPT-4, one of its largest AI models. The company released code related to the interpretability work and a visualization tool that can be used to see how the words in different sentences activate concepts including profanity and erotic content in GPT-4 and another model. Knowing how a model represents certain concepts could be a step towards being able to dial down those associated with unwanted behavior, to keep an AI system on the rails. It could also make it possible to tune an AI system to favor certain topics or ideas.
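The “dialing down” the passage describes can be pictured as editing a model’s sparse feature activations before they are turned back into ordinary activations: zero out, or scale, the feature tied to an unwanted concept. Everything here is a toy illustration, with a random decoder and a hand-picked feature index standing in for a learned concept.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, n_features = 16, 64
W_dec = rng.normal(0, 0.1, (d_model, n_features))  # toy feature dictionary

# Sparse feature activations for one input; suppose feature 7 were the one
# tied to an unwanted concept (purely illustrative).
z = np.zeros(n_features)
z[[3, 7, 42]] = [0.9, 1.5, 0.4]

unwanted = 7
z_edited = z.copy()
z_edited[unwanted] = 0.0   # "dial down" the concept before reconstructing

x_original = W_dec @ z
x_steered = W_dec @ z_edited  # activation with the concept's contribution removed
```

Scaling a feature up instead of zeroing it would correspond to tuning the system to favor a topic, the other direction the passage mentions.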
