
Big Tech muscles in: The 12 months that changed Silicon Valley forever

ChatGPT’s release a year ago triggered a desperate scramble among tech companies and alarm from some of the people who helped to invent it. (Hokyoung Kim, The New York Times)

At 1 p.m. on a Friday shortly before Christmas last year, Kent Walker, Google’s top lawyer, summoned four of his employees and ruined their weekend.

The group worked in SL1001, a bland building with a blue glass facade betraying no sign that dozens of lawyers inside were toiling to protect the interests of one of the world’s most influential companies. For weeks they had been prepping for a meeting of powerful executives to discuss the safety of Google’s products. The deck was done. But that afternoon Walker told his team the agenda had changed, and they would have to spend the next few days preparing new slides and graphs.

In fact, the entire agenda of the company had changed — all in nine days. Sundar Pichai, Google’s CEO, had decided to ready a slate of products based on artificial intelligence — immediately. He turned to Walker, the same lawyer he was trusting to defend the company in a profit-threatening antitrust case in Washington, D.C. Walker knew he would need to persuade the Advanced Technology Review Council, as Google called the group of executives, to throw off their customary caution and do as they were told.

It was an edict, and edicts didn’t happen very often at Google. But Google was staring at a real crisis. Its business model was potentially at risk.

What had set off Pichai and the rest of Silicon Valley was ChatGPT, the artificial intelligence program that had been released on Nov. 30, 2022, by an upstart called OpenAI. It had captured the imagination of millions of people who had thought AI was science fiction until they started playing with the thing. It was a sensation. It was also a problem.

Google had been developing its own AI technology that did many of the same things. Pichai was focused on ChatGPT’s flaws — that it got stuff wrong, that sometimes it turned into a biased pig. What amazed him was that OpenAI had gone ahead and released it anyway, and that consumers loved it. If OpenAI could do that, why couldn’t Google?

For tech company bosses, the decision of when and how to turn AI into a (hopefully) profitable business was a simpler risk-reward calculus. But to win, you had to have a product.

By Monday morning, Dec. 12, the team at SL1001 had a new agenda with a deck labeled “Privileged and Confidential/Need to Know.” Most attendees tuned in over videoconference. Walker started the meeting by announcing that Google was moving ahead with a chatbot and AI capabilities that would be added to cloud, search and other products.

“What are your concerns? Let’s get in line,” Walker said, according to Jen Gennai, the director of responsible innovation.

Eventually a compromise was reached. They would limit the rollout, Gennai said. And they would avoid calling anything a product. For Google, it would be an experiment. That way it didn’t have to be perfect. (A Google spokesperson said the ATRC did not have the power to decide how the products would be released.)

What played out at Google was repeated at other tech giants after OpenAI released ChatGPT in late 2022. They all had technology in various stages of development that relied on neural networks — AI systems that recognized sounds, generated images and chatted like a human. That technology had been pioneered by Geoffrey Hinton, an academic who had worked briefly with Microsoft and was now at Google. But the tech companies had been slowed by fears of rogue chatbots and of economic and legal mayhem.

Once ChatGPT was unleashed, none of that mattered as much, according to interviews with more than 80 executives and researchers, as well as corporate documents and audio recordings. The instinct to be first or biggest or richest — or all three — took over. The leaders of Silicon Valley’s biggest companies set a new course and pulled their employees along with them.

Over 12 months, Silicon Valley was transformed. Turning artificial intelligence into actual products that individuals and companies could use became the priority. Worries about safety and whether machines would turn on their creators were not ignored, but they were shunted aside — at least for the moment.

“Speed is even more important than ever,” Sam Schillace, a top executive, wrote Microsoft employees. It would be, he added, an “absolutely fatal error in this moment to worry about things that can be fixed later.”

The strange thing was that the leaders of OpenAI never thought ChatGPT would shake up Silicon Valley. In early November 2022, a few weeks before it was released to the world, it didn’t really exist as a product. Most of the 375 employees working in their new offices, a former mayonnaise factory, were focused on a more powerful version of technology, called GPT-4, that could answer almost any question using information gleaned from an enormous collection of data scraped from seemingly everywhere.

In mid-November 2022, OpenAI CEO Sam Altman; Greg Brockman, its president; and others met in a top-floor conference room to discuss the problems with their breakthrough tech yet again. Suddenly Altman made the decision — they would release the old, less-powerful technology.

On Nov. 29, the night before the launch, Brockman hosted drinks for the team. He didn’t think ChatGPT would attract a lot of attention, he said. His prediction: “no more than one tweet thread with 5k likes.”

Brockman was wrong. On the morning of Nov. 30, Altman tweeted about OpenAI’s new product, and the company posted a jargon-heavy blog item. And then, ChatGPT took off. Almost immediately, sign-ups overwhelmed the company’s servers. Engineers rushed in and out of a messy space near the office kitchen, huddling over laptops to pull computing power from other projects. In five days, more than 1 million people had used ChatGPT. Within a few weeks, that number would top 100 million. Though nobody was quite sure why, it was a hit. Network news programs tried to explain how it worked. A late-night comedy show even used it to write (sort of funny) jokes.

Mark Zuckerberg’s head was elsewhere. He had spent the entire year reorienting the company around the metaverse and was focused on virtual and augmented reality.

But ChatGPT would demand his attention. His top AI scientist, Yann LeCun, arrived in the Bay Area from New York about six weeks later for a routine management meeting at Meta, according to a person familiar with the meeting.

In Paris, LeCun’s scientists had developed an AI-powered bot that they wanted to release as open-source technology. Open source meant that anyone could tinker with its code. They called it Genesis. But when they sought permission to release it, Meta’s legal and policy teams pushed back, according to five people familiar with the discussion.

Caution versus speed was furiously debated among the executive team in early 2023 as Zuckerberg considered Meta’s course in the wake of ChatGPT.

Zuckerberg wanted to push out a project fast. Genesis was changed to LLaMA, short for “Large Language Model Meta AI,” and released to 4,000 researchers outside the company. Soon Meta received over 100,000 requests for access to the code.

But within days of LLaMA’s release, someone put the code on 4chan, the fringe online message board. Meta had lost control of its chatbot, raising the possibility that the worst fears of its legal and policy teams would come true. Researchers at Stanford University showed that the Meta system could easily do things like generate racist material.

On June 6, Zuckerberg received a letter about LLaMA from Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn. “Hawley and Blumenthal demand answers from Meta,” said a news release.

The letter called Meta’s approach risky and vulnerable to abuse and compared it unfavorably with ChatGPT. Why, the senators seemed to want to know, couldn’t Meta be more like OpenAI?

At the end of the summer of 2022, Microsoft’s offices weren’t yet back to their pre-pandemic bustle. But on Sept. 13, Satya Nadella summoned his top executives to a meeting at Building 34, Microsoft’s executive nerve center. It was two months before Altman made the decision to release ChatGPT.

Nadella took the lectern to tell his lieutenants that everything was about to change. This was an executive order from a leader who typically favored consensus. “We are pivoting the whole company on this technology,” Eric Horvitz, the chief scientist, later remembered him saying. “This is a central advancement in the history of computing, and we are going to be on that wave at the front of it.”

It all had to stay secret for the time being. Three “tented projects” were set up in early October to get the big pivot started. They were devoted to cybersecurity, the Bing search engine, and Microsoft Word and related software.

Microsoft invited journalists to its Redmond, Washington, campus on Feb. 7 to introduce a chatbot in Bing to the world. They were instructed not to tell anybody they were going to a Microsoft event, and the topic wasn’t disclosed.

But somehow, Google found out. On Feb. 6, to get out ahead of Microsoft, it put up a blog post by Pichai announcing that Google would be introducing its own chatbot, Bard. It didn’t say exactly when.

By the morning of Feb. 8, the day after Microsoft announced the chatbot, its shares were up 5%. But for Google, the rushed announcement became an embarrassment. Researchers spotted errors in Google’s blog post. An accompanying GIF simulated Bard saying that the Webb telescope had captured the first pictures of an exoplanet, a planet outside the solar system. In fact, a telescope at the European Southern Observatory in northern Chile got the first image of an exoplanet in 2004. Bard had gotten it wrong, and Google was ribbed in the news media and on social media.

It was, as Pichai later said in an interview, “unfortunate.” Google’s stock dropped almost 8%, wiping out more than $100 billion in value.

Hinton, Google’s best-known scientist, had always poked fun at doomers, rationalists and effective altruists who worried that AI would end humankind in the near future. He had developed much of the science behind artificial intelligence as a professor at the University of Toronto and became a wealthy man after joining Google in 2013. He is often called the godfather of AI.

But the new chatbots changed everything for him. The science had moved more quickly than he had expected. Microsoft’s introduction of its chatbot convinced him that Google would have no choice but to try to catch up. And the corporate race shaping up between tech giants seemed dangerous.

“If you think of Google as a company whose aim is to make profits,” Hinton said in April, “they can’t just let Bing take over from Google search. They’ve got to compete with that. When Microsoft decided to release a chatbot as the interface for Bing, that was the end of the holiday period.”

For the first time in more than 50 years, he stepped away from research. And then in April, he called Pichai and said goodbye.

This article originally appeared in The New York Times.
