Research Scientist Andrei Barbu Gives Us Input On LLM Design

June 8, 2024

John Werner - Contributor
I am an MIT Senior Fellow, 5x-founder & VC investing in AI

Today I heard from MIT research scientist Andrei Barbu about working with LLMs, and how to prevent certain kinds of problems related to data leaks.

“We’re interested in studying language in particular,” he said of his work, turning to some of the goals of research teams in this area.

Thinking about the duality of human and computer cognition, he pointed out some differences. Humans, he said, teach one another, for example. Another thing humans can do is keep secrets. That might be challenging for our digital counterparts.

“There's a problem with LLMs,” he noted, explaining their potential for leaks. “They can’t keep a secret.”

In outlining how to identify the issue, Barbu talked about prompt injection attacks as a prime example. We heard something a little like this from Adam Chipala just prior, where he mentioned verifiable software principles as a potential solution.

Barbu brought up something that I found interesting, though: a kind of catch-22 for systems that are not very good at sealing in data.

“Models are as sensitive as the most sensitive piece of data put inside that model,” he explained. “People can interrogate the model. … (on the other hand, models are) as weak and vulnerable to attack as the least sensitive piece that you put in...”

Read the full story on the website using the link below.

Associated CBMM Pages: