How the Collapse of Sam Bankman-Fried’s Crypto Empire Has Disrupted A.I.

SAN FRANCISCO — In April, a San Francisco artificial intelligence lab called Anthropic raised $580 million for research involving “A.I. safety.”

Few in Silicon Valley had heard of the one-year-old lab, which is building A.I. systems that generate language. But the amount of money promised to the tiny company dwarfed what venture capitalists were investing in other A.I. start-ups, including those stocked with some of the most experienced researchers in the field.

The funding round was led by Sam Bankman-Fried, the founder and chief executive of FTX, the cryptocurrency exchange that filed for bankruptcy last month. After FTX’s sudden collapse, a leaked balance sheet showed that Mr. Bankman-Fried and his colleagues had fed at least $500 million into Anthropic.

Their investment was part of a quiet and quixotic effort to explore and mitigate the dangers of artificial intelligence, which many in Mr. Bankman-Fried’s circle believed could eventually damage humanity or even destroy the world. Over the past two years, the 30-year-old entrepreneur and his FTX colleagues funneled more than $530 million — through either grants or investments — into more than 70 A.I.-related companies, academic labs, think tanks, independent projects and individual researchers to address concerns over the technology, according to a tally by The New York Times.

Now some of these organizations and individuals are unsure whether they can continue to spend that money, said four people close to the A.I. efforts who were not authorized to speak publicly. They said they were worried that Mr. Bankman-Fried’s fall could cast doubt over their research and undermine their reputations. And some of the A.I. start-ups and organizations may eventually find themselves embroiled in FTX’s bankruptcy proceedings, with their grants potentially clawed back in court, they said.

The concerns in the A.I. world are an unexpected fallout from FTX’s disintegration, showing how far the ripple effects of the crypto exchange’s collapse and Mr. Bankman-Fried’s vaporizing fortune have traveled.

Sam Bankman-Fried during the DealBook Summit on Wednesday. His attempts to influence artificial intelligence are part of a philanthropic philosophy he has trumpeted, known as effective altruism. (Credit: Hiroko Masuike/The New York Times)

“Some might be surprised by the connection between these two emerging fields of technology,” Andrew Burt, a lawyer and visiting fellow at Yale Law School who specializes in the risks of artificial intelligence, said of A.I. and crypto. “But under the surface, there are direct links between the two.”

Mr. Bankman-Fried, who faces investigations into FTX’s collapse and who spoke at The Times’s DealBook conference on Wednesday, declined to comment. Anthropic declined to comment on his investment in the company.

Mr. Bankman-Fried’s attempts to influence A.I. stem from his involvement in “effective altruism,” a philanthropic movement in which donors seek to maximize the impact of their giving for the long term. Effective altruists are often concerned with what they call catastrophic risks, such as pandemics, bioweapons and nuclear war.

Their interest in artificial intelligence is particularly acute. Many effective altruists believe that increasingly powerful A.I. can do good for the world, but worry that it can cause serious harm if it is not built in a safe way. While A.I. experts agree that any doomsday scenario is a long way off — if it happens at all — effective altruists have long argued that such a future is not beyond the realm of possibility and that researchers, companies and governments should prepare for it.

Over the last decade, many effective altruists have worked inside top A.I. research labs, including DeepMind, which is owned by Google’s parent company, and OpenAI, which was founded by Elon Musk and others. They helped create a research field called A.I. safety, which aims to explore how A.I. systems might be used to do harm or might unexpectedly malfunction on their own.

Effective altruists have helped drive similar research at Washington think tanks that shape policy. Georgetown University’s Center for Security and Emerging Technology, which studies the impact of A.I. and other emerging technologies on national security, was largely funded by Open Philanthropy, an effective altruist giving organization backed by a Facebook co-founder, Dustin Moskovitz. Effective altruists also work as researchers inside these think tanks.

Mr. Bankman-Fried has been a part of the effective altruist movement since 2014. Embracing an approach called “earning to give,” he told The Times in April that he had deliberately chosen a lucrative career so he could give away much larger amounts of money.

In February, he and several of his FTX colleagues announced the Future Fund, which would support “ambitious projects in order to improve humanity’s long-term prospects.” The fund was led partly by Will MacAskill, a founder of the Center for Effective Altruism, as well as other key figures in the movement.

Will MacAskill, a founder of the Center for Effective Altruism, gave a TED Talk in 2018. (Credit: Lawrence Sumulong/Getty Images)

By the beginning of September, the Future Fund had promised $160 million in grants to a wide range of projects, including research involving pandemic preparedness and economic growth. About $30 million was earmarked for donations to an array of organizations and individuals exploring ideas related to A.I.

Among the Future Fund’s A.I.-related grants was $2 million to a little-known company, Lightcone Infrastructure. Lightcone runs the online discussion site LessWrong, which in the late 2000s began exploring the possibility that A.I. would one day destroy humanity.

Mr. Bankman-Fried and his colleagues also funded several other efforts that were working to mitigate the long-term risks of A.I., including $1.25 million to the Alignment Research Center, an organization that aims to align future A.I. systems with human interests so that the technology does not go rogue. They also gave $1.5 million for similar research at Cornell University.

The Future Fund also donated nearly $6 million to three projects involving “large language models,” an increasingly powerful breed of A.I. that can write tweets, emails and blog posts and even generate computer programs. The grants were intended to help mitigate how the technology might be used to spread disinformation and to reduce unexpected and unwanted behavior from these systems.

After FTX filed for bankruptcy, Mr. MacAskill and others who ran the Future Fund resigned from the project, citing “fundamental questions about the legitimacy and integrity of the business operations” behind it. Mr. MacAskill did not respond to a request for comment.

Beyond the Future Fund’s grants, Mr. Bankman-Fried and his colleagues directly invested in start-ups, most notably the $500 million financing of Anthropic. The company was founded in 2021 by a group that included a contingent of effective altruists who had left OpenAI. It is working to make A.I. safer by developing its own language models, which can cost tens of millions of dollars to build.

Some organizations and individuals have already received their funds from Mr. Bankman-Fried and his colleagues. Others got only a portion of what was promised to them. Some are unsure whether the grants will have to be returned to FTX’s creditors, said the four people with knowledge of the organizations.

Oren Etzioni of the Allen Institute for Artificial Intelligence said the effective altruist community sometimes made today’s technologies seem more powerful or more dangerous than they really were. (Credit: Kyle Johnson for The New York Times)

Charities are vulnerable to clawbacks when donors go bankrupt, said Jason Lilien, a partner at the law firm Loeb & Loeb who specializes in charities. Companies that receive venture investments from bankrupt companies may be in a somewhat stronger position than charities, but they are also vulnerable to clawback claims, he said.

Dewey Murdick, the director of the Center for Security and Emerging Technology, the Georgetown think tank that is backed by Open Philanthropy, said effective altruists had contributed to important research involving A.I.

“Because they have increased funding, it has increased attention on these issues,” he said, pointing to a growing discussion of how A.I. systems can be designed with safety in mind.

But Oren Etzioni of the Allen Institute for Artificial Intelligence, a Seattle A.I. lab, said that the views of the effective altruist community were sometimes extreme and that its members often made today’s technologies seem more powerful or more dangerous than they really were.

He said the Future Fund had offered him money this year for research that would help predict the arrival and risks of “artificial general intelligence,” a machine that can do anything the human brain can do. But that idea is not something that can be reliably predicted, Mr. Etzioni said, because scientists do not yet know how to build it.

“These are smart, sincere people committing dollars into a highly speculative enterprise,” he said.
