The former Meta executive traces the opportunities and challenges shaping business-critical A.I. today.
Photo by Kimberly Wang, Courtesy of Cohere
Joelle Pineau, recognized on this year’s A.I. Power Index, has long been one of the field’s most influential voices on reproducibility, open science and ethical frameworks in A.I. After nearly eight years leading Meta’s FAIR research division, Pineau made a high-profile move in August to Cohere as its first chief A.I. officer. Pineau is steering development of the company’s North platform and its expanding portfolio of enterprise agents, with a focus on privacy, security and interoperability with sensitive data—priorities that set the company apart from rivals chasing the more nebulous goal of AGI. She brings with her the conviction that open protocols and transparent systems are practical necessities for secure, business-critical applications. Pineau also pushes back against the idea that A.I. is an inscrutable black box, arguing that enterprise systems can, in fact, be more transparent than human decision-making. Her perspective underscores a broader shift in the industry: away from speculative visions of AGI and toward the practical, secure and ethically grounded deployment of A.I. at scale.
What’s one assumption about A.I. that you think is dead wrong?
A lot of people think of A.I. as a black box, which isn’t really accurate. It’s certainly complicated and complex, but it’s not impossible to trace and understand how a prompt leads to an output. Especially in an enterprise setting, where you’re working with agents that use internal data and tools, more often than not you can trace where information is coming from more easily than you could understand another human’s thought process.
If you had to pick one moment in the last year when you thought “Oh shit, this changes everything” about A.I., what was it?
The area where I’ve seen the most impressive rate of change is in A.I.-assisted software development. In the ability of LLMs to generate code, assist developers and resolve bugs, there’s just been amazing progress in the last year, and this changes a lot of things. It opens up the door to much faster development and validation of complex systems. It increases the level of verification and transparency, since it’s now possible to ask questions in natural language about the behavior properties of software systems. And it empowers almost anyone, even with very little computer science training, to implement their ideas quickly. It also opens up the door to A.I. systems self-improving. The technology is not perfect, and there are still many years of progress ahead, but there is no going back.
How do you reconcile your commitment to open science with building proprietary enterprise A.I. solutions, and what does responsible A.I. development look like in a commercial context?
Privacy and security are really central to the conversation about responsible A.I. in a commercial context. Enterprises can’t afford to have data leak. Whether it’s internal proprietary data or sensitive customer data, a big part of my work is making agents better and more powerful without compromising security. One thing we know from many years of computer security is that open protocols are often in fact more secure, because flaws are discovered much faster and their properties are better understood. So I see open science, especially during the research and early development phase, as an essential practice to improve the privacy and security properties of enterprise A.I. solutions. And this is why Cohere Labs has been built on an open science model from the beginning.
What specific advantages do you see in Cohere’s approach to large language models, and how do you plan to differentiate from the dominant players who have significant resource advantages?
The approach Cohere is taking is more focused than players that are chasing AGI or general superintelligence, and that gives us a leg up in the enterprise market. Cohere is able to differentiate itself by focusing on the things that matter to enterprises, which have proven to be privacy, security and working well with enterprise data sources. This is particularly important in domains such as finance, healthcare, telecoms, government and many others.
You’ve spent years championing reproducible A.I. research and ethical frameworks at Meta. How are you applying those principles to Cohere’s North platform and enterprise A.I. agents, particularly around issues like bias, transparency and accountability in business-critical applications?
The goal at Cohere is for enterprises to have traceable, controllable and customizable A.I. for their systems, including North. To consistently achieve this, I’ll continue to champion rigorous testing, transparent evaluation protocols, robust performance and clear documentation. Our evaluation strategy also needs to account for both standard engineering metrics (accuracy, speed, efficiency) and broader social impact metrics (safety, bias, transparency).