How Carol Juel is leading Synchrony through the hazards of generative AI

Carol Juel, CTO and COO, Synchrony Financial
She is mindful of the risks of generative AI, but at the same time, says Carol Juel, chief technology officer and chief operating officer at Synchrony Financial, it's important to allow employees to explore it.

In Synchrony Financial's newly decked-out Experience Center overlooking Bryant Park in New York City, Carol Juel, chief technology officer and chief operating officer at the $108 billion-asset bank, leads a team testing innovative ideas and technologies.

Juel is not actually there all the time; like the rest of the staff, she works in a hybrid manner, some days from home and some from the company's Stamford, Connecticut, headquarters. During a recent visit, one employee, Lisa, rolled in on a Double Robotics video conferencing robot while working from her home in Atlanta.

Juel leads digital and cultural transformation across the organization and has spent several billion dollars on the effort, using hackathons, incubators and accelerators. During an interview Tuesday, she showed off the new space and shared some of her plans for generative AI.

Where are you using or thinking of using advanced AI, such as ChatGPT, at Synchrony?

CAROL JUEL: I think at this point last year, probably less than 10% of the population actually understood and appreciated the concept of generative AI. And then ChatGPT came on the scene in November, and then obviously the iterations of that. And I think it is an enabler and an accelerant at the same time. How do you explain a large language model and how do you test and learn in that? And how does something actually get created? Because the idea of generative AI is taking information to potentially create something that doesn't exist. 

And the hallucination issue, for a bank, is tricky. 

It is, very much. We have a very strong test-and-learn approach to this. Like most new technologies, you have to limit it for folks who may not fully understand the power and could do something unintentional. So as a good steward and as a company, you have to protect against that. But at the same time, you have to allow for your employees to innovate, to think about it, to explore it in new ways. So how we approach this is, our AI teams are aligned with our innovation or incubation teams.

That alignment is tied to how we think about product and development in the future. A couple of months ago, we had our first generative AI hackathon. It was our most oversubscribed hackathon. We had more than 300 people in 30 teams working across the globe. At a hackathon, you bring in technology and the different teams develop use cases to apply it to many different aspects of your business. Normally, maybe eight or nine themes would come out across 30 teams. Among these 30 teams, there were 30 different ideas. That's because generative AI can be applied to so many different things, like helping customers select products through a marketplace and in customer service.

We're taking about four or five of these ideas back into the incubation lab and we're going to start to vet them, working very closely with the AI working group that's helping us define some of the guardrails. Because I love the excitement around it, too. We've all used it at home, and I don't want my kids writing their middle school papers like this, but I do like the idea of this augmentation of information that can, in theory, when used in the right way, improve the experience, leverage the data that you have and connect it with other sources. It's an exciting time in the space we're living in because it's changing so rapidly.

What did people use for their test base at the hackathon? Did they actually use OpenAI's ChatGPT? 

We did.

So you're using that internally just for testing and experimenting and such? 

Yes. And obviously, the thing you have to be very careful of is that anything you put into it is part of the model. Therein lies the risk. If, by accident, someone shares something that is either intellectual property or confidential client data, it becomes part of the large language model. One of the things we had to do before the hackathon was give everyone a training session about what type of information is appropriate in order for us to experiment and learn, and what are the things that we wouldn't want to ask it or say to it, because there's no way to pull it back. And I think that's the part of this technology where the power is, but also where the risk is.

With every new technology, there is potential risk. But with this one, because there is so much unknown and such potential for misinterpretation or misuse, we just have to be very cognizant.
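The guardrail Juel describes, screening what employees submit before it ever reaches an external model, can be sketched in a few lines. The Python below is a hypothetical illustration, not Synchrony's actual tooling; the patterns and the submit_prompt helper are assumptions chosen only to show the idea of refusing prompts that appear to contain confidential data.

```python
import re

# Hypothetical pre-submission guardrail: scan a prompt for obviously
# sensitive patterns before it is sent to an external model, since data
# shared with the model cannot be pulled back. Patterns are illustrative,
# not exhaustive.
SENSITIVE_PATTERNS = {
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "possible Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_prompt(prompt: str) -> None:
    """Refuse to send a prompt that appears to contain confidential data."""
    findings = check_prompt(prompt)
    if findings:
        raise ValueError("Prompt blocked: " + ", ".join(findings))
    # In a real system, the approved prompt would be forwarded to the
    # model's API here; printing stands in for that call.
    print("Prompt cleared for submission.")

submit_prompt("Summarize the themes from our last hackathon.")  # cleared
# submit_prompt("Cardholder 4111 1111 1111 1111 is past due.")  # would raise
```

A production version would go beyond regular expressions, but the shape is the same as the training Synchrony gave its hackathon teams: check first, and refuse anything that cannot be unshared.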
