    For whose sake: OpenAI making very cynical choice

    Sam Altman said that ads “fundamentally misalign a user’s incentives with the company providing the service”

    Last spring, Sam Altman, the chief executive of OpenAI, sat in the chancel of Harvard Memorial Church, sermonizing against advertising. “I will disclose, just as a personal bias, that I hate ads,” he began in his usual calm cadence, one sneakered foot crossed onto his lap. He said that ads “fundamentally misalign a user’s incentives with the company providing the service,” adding that he found the notion of mixing advertising with artificial intelligence — the product his company is built on — “uniquely unsettling.”

    The comment reminded me immediately of something I’d heard before, from around the time I was first getting online. It came from a seminal paper that Sergey Brin and Larry Page wrote in 1998, when they were at Stanford developing Google. They argued that advertising often made search engines less useful and that companies that relied on it would “be inherently biased towards the advertisers and away from the needs of the consumers.”

    I showed up at Stanford as a freshman in 2000, not long after Brin and Page had accepted a $25 million round of venture capital funding to turn their academic project into a business. My best friend there persuaded me to try Google, describing it as more ethical than the search engines that had come before. What we didn’t realise was that amid the dot-com crash, which coincided with our arrival, Google’s investors were pressuring the co-founders to hire a more experienced chief executive.

    Brin and Page brought in Eric Schmidt, who in turn hired Sheryl Sandberg, the chief of staff to Lawrence H Summers when he was Treasury secretary, to build an advertising program.

    My senior year, news filtered into The Stanford Daily, where I worked, that Facebook, which some of us had heard about from friends at Harvard, where it had started, was coming to our campus. “I know it sounds corny, but I’d love to improve people’s lives, especially socially,” Mark Zuckerberg, Facebook’s co-founder, told The Daily’s reporter. He added, “In the future, we may sell ads to get the money back, but since providing the service is so cheap, we may choose not to do that for a while.”

    Zuckerberg went on to quit Harvard and move to Palo Alto, Calif. I went on to The Wall Street Journal. Covering Facebook in 2007, I got a scoop that Facebook — which had introduced ads — would begin using data from individual users and their “friends” on the site to sharpen how ads were targeted to them. Like Google before it, Facebook positioned this as being good for users. Zuckerberg even brought Sandberg over from Google to help. When an economic downturn, followed by an IPO, later put pressure on Facebook, it followed Google’s playbook: doubling down on advertising. In this case, it did so by collecting and monetising even more personal information about its users.

    Which brings me back to Altman and OpenAI, the parent company of ChatGPT. It started as a nonprofit with a stated mission to build AI that would benefit humanity. After several interim restructurings, OpenAI has now announced that it will create a public benefit corporation (albeit one still controlled by the nonprofit) to serve both the public good and shareholders’ needs while removing a cap on investors’ returns — a change its chief financial officer, Sarah Friar, said “gets us to an IPO-able event … if and when we want to.”

    The stage is set, then, for the next phase of Big Tech’s ever-deepening exploitation of the natural human desire for information, connection and well-being. It’s not surprising, in that context, that Altman and other OpenAI executives are gently floating the prospect of using advertising after all. In December, Friar told The Financial Times that OpenAI is considering it, though she clarified that the company has “no active plans” for ads. Altman mused later about an affiliate revenue model, by which his company would collect a percentage of sales whenever people bought something they discovered through an OpenAI feature called Deep Research. He added, “That would be cool.”

    Altman specified that OpenAI wouldn’t accept money to change the placement of product mentions. Still, it’s not hard to imagine how a new OpenAI might work, combining all the personal information we already share with ChatGPT — marriage troubles, office conflicts — with the billions of words of text OpenAI consumed while building its products, to send us increasingly well-targeted recommendations about what to do with our time, money and attention.

    I doubt we would have to wait long for Altman to instruct us that this is for our benefit. And once ChatGPT went there, it would be a given that everyone else would, too. Google, meanwhile, is already placing ads alongside its AI-generated search results.

    Altman recently claimed that about 10% of the world uses the company’s products. That’s a lot for a company whose products have been publicly available for only a few years — but it also means that 90% of the world doesn’t. Some people are actively resisting AI products like ChatGPT and Google’s Gemini. They have seen this movie before. This time, they can see what’s coming.

    © The New York Times Company

    Vauhini Vara