Marco Argenti, CIO at Goldman Sachs, is one of Wall Street’s most influential technology executives. Courtesy of Goldman Sachs

Marco Argenti, featured on this year’s A.I. Power Index, is at the center of Goldman Sachs’ ambitious push to integrate A.I. across one of the world’s most tightly regulated industries. As chief information officer, Argenti leads the firm’s A.I. strategy, from piloting Cognition Labs’ coding agent Devin—the first deployment of its kind by a major bank—to scaling the GS AI Assistant, an internal platform now used by every employee. While the technology is transformative, Goldman’s approach is firmly grounded in safety and control: every line of code written by Devin undergoes the same human review and rigorous testing as any developer’s work.

Under Argenti’s leadership, Goldman has introduced tools such as Translate AI and a developer copilot that boosted productivity by 20 percent in its first year, while experimental platforms like Banker Copilot are being refined for broader adoption. Already, the GS AI Assistant processes more than one million prompts per month from bankers, traders and asset managers. For Argenti, this shift is about reshaping management itself. Supervising A.I. agents requires new skills of description, delegation and oversight, a shift that places early-career workers at the forefront of the bank’s evolving hybrid workforce of humans and machines.

What’s one assumption about A.I. that you think is dead wrong?

That models’ problem-solving abilities cannot generalize.

If you had to pick one moment in the last year when you thought “this changes everything” about A.I., what was it?

The advancements in agentic A.I. have been incredible. Having A.I. that can not only answer questions but reason and plan to complete complex tasks and collaborate with other agents is a game-changer across every industry.

What’s something about A.I. development that keeps you up at night that most people aren’t talking about?

Where does the model end, and where do applications start? Depending on where you draw the line, the market for applications and software can look very different.

Goldman Sachs was the first major bank to pilot Cognition Labs’ autonomous A.I. agent Devin. What convinced you to be first, and how do you manage regulatory compliance with autonomous coding agents?

Devin will produce code “merge requests” like any of our developers; the code will be reviewed by a human and go through a rigorous set of CI/CD pipeline controls before being released into production. We are committed to maintaining a strong risk management framework. The guardrails we’ve implemented help mitigate potential risks, and Devin itself reduces risk by standardizing and automating our software development processes.

When we first started on our generative A.I. journey at Goldman Sachs, we developed the GS AI Platform as a way to safely and securely leverage popular LLMs while incorporating our own data. By developing the platform in this way, we are well placed to build internal applications that have the potential to leverage agentic A.I. responsibly over time. A few months ago, we began collaborating with the leading coding agent company Cognition Labs and piloting Devin, an autonomous generative A.I. agent designed to help transform the way software is developed and maintained. Devin is now being tested in our systems in a controlled environment under the management of our engineers.

As a firm, we prioritize safety and security and believe that Devin will be able to meet our quality and control expectations while still delivering speed. We anticipate that after this initial phase, and review and approval by our governance framework, we will roll out the use of this A.I. tool for specific use cases at the firm. We are looking at targeted use cases to reduce developer toil on repetitive tasks such as upgrading dependencies or migrating code from one language to another. We see agentic tools as a potential force multiplier for our people, presenting an opportunity to improve the speed and scale of our development capabilities while also improving the developer experience.

Regarding Goldman Sachs’ A.I. adoption across trading and advisory, where has A.I.’s impact been most dramatic internally?

Built on top of the GS AI Platform, the GS AI Assistant was recently scaled to all employees at the firm. It is an internal natural-language conversational application that gives end users safe, secure access to firm-approved large language models (LLMs) through the GS AI Platform, and it is designed to enhance efficiency and increase the productivity of knowledge workers within the firm. Bankers, traders, asset managers and wealth managers at Goldman Sachs have been leveraging the GS AI Assistant. This tool puts knowledge from a variety of sources at the fingertips of our people—reasoned, summarized and connected to our data sources.

We also have a tool we call Banker Copilot for some of our investment bankers. It is not yet scaled, but a select group is using and optimizing it before it can be deployed more broadly. It helps compile relevant research and gives our bankers a conversational interface. Beyond this, we have a variety of tools in various stages of evaluation and development for different pockets of our business.

How do you innovate when regulators are still figuring out the rules?

Safe and responsible A.I. is an absolute priority for Goldman Sachs. We take a two-pronged approach to unlocking generative A.I.: the platform we have built, the GS AI Platform, and strategic partnerships with leading technology companies. We are leveraging the GS AI Platform across use cases within the firm to accelerate A.I. experimentation and deployment in a safe and secure manner. We created a series of tenets to guide our work with the platform:

  • Enable safe and compliant A.I.: The GS AI Platform gives developers access to LLMs with both our “wrapped” shield of guardrails and embedded controls and security. These guardrails and controls are at the core of how our use cases are developed internally.
  • Be model independent: Taking into consideration the best characteristics of each model, we allow users to choose the model best suited to their intended purpose.
  • Maximize accuracy: By connecting to our original data sources and making our data accessible and understandable to the A.I., we have the option to fine-tune models with our own internal data in a safe and compliant manner that seeks to eliminate hallucinations and bias and to protect information, among other concerns.
  • Accelerate development by abstracting away complexity and reducing heavy lifting: Our platform allows applications to be built ‘over the top,’ achieving a high level of standardization, faster testing and release cycles, easy access to data sources and built-in safety and compliance. Developers don’t have to start from scratch with every application, and as a result, the time it takes to build generative A.I. applications has shortened significantly.

The goal of this platform is to be the central engine for accelerating A.I. experimentation and adoption at the firm in a tangible, safe, responsible and compliant manner.

Through the GS AI Platform, we are developing tools and GenAI-enabled apps across our businesses with robust content safety, security and confidentiality guardrails.

You’re leading 12,000+ engineers toward a ‘hybrid workforce of humans and A.I. agents’ where productivity could triple or quadruple. How do you prepare managers to supervise A.I. agents alongside human employees?

Not only managers but also those who used to be individual contributors will have to develop three managerial skills at a minimum: the ability to describe, delegate and supervise.

The last is the most critical, because giving agency to an A.I. without understanding what it produces, and without being able to critique and correct it, is a recipe for failure. The shift to agentic A.I. makes early-career workers more critical than ever, as they have grown up alongside generative A.I. Across the workforce, we intend to invest in our talent to ensure human adoption keeps pace with technological innovation.

Your developer copilot achieved 20 percent efficiency gains, and now every Goldman employee has access to the GS A.I. Assistant. What’s the internal resistance been like, and how do you measure ROI on firm-wide A.I. deployment?

The response internally has been excitement, not resistance. Adoption and usage are key metrics in the early stages. With the GS AI Assistant, for example, we are already seeing over one million prompts per month.

Goldman Sachs’ Marco Argenti On Why Early-Career Workers Are Key to A.I. Management

