OpenAI’s Altman Faces Board’s Trust Question as Sutskever Flags AI Oversight Risks
As artificial intelligence accelerates, New York’s reliance on tech giants like OpenAI raises urgent questions about power, trust, and public oversight in the digital age.
On a late autumn afternoon on West 34th Street, as commuters streamed through Penn Station clutching smartphones programmed by invisible algorithms, New York City’s future brushed uncomfortably close to its present. The city—magnet for capital, creativity, and cataclysm alike—now depends on the integrity of faraway Silicon Valley engineers whose names most New Yorkers wouldn’t recognise. But new revelations about Sam Altman, chief executive of OpenAI, should give pause: what happens when the architect of tomorrow’s most potent technology is accused of playing fast and loose with the truth?
In the fall of 2023, Ilya Sutskever, OpenAI’s chief scientist, confided to others on the firm’s board that he doubted Altman’s fitness to helm such civilization-altering work. He assembled seventy pages of internal communications and human-resources documents to support his worries, alleging that Altman had misrepresented safety protocols and deceived colleagues. Sutskever, who had once counted Altman as a friend, put it bluntly: “I don’t think Sam is the guy who should have his finger on the button.” His concern was shared by fellow directors—including Helen Toner, an influential AI policy analyst, and entrepreneur Tasha McCauley—who, under the precepts of OpenAI’s nonprofit charter, held an explicit mandate to prioritise the safety of humanity above shareholder returns.
The implications for New York are as immediate as they are profound. The city’s economy is stitched together by vast quantities of data and an ever-expanding armada of pattern-matching machines. Its banks run on machine learning. Urban planners crunch transit data to curb congestion. Even the city’s social-welfare programs are increasingly optimised via AI. OpenAI and its competitors sell the algorithms that ingest, organise, and render legible the city’s digital detritus. If the very shepherd of these systems is accused of prevarication or worse, is New York unwittingly anchoring its critical infrastructure to unsteady ground?
At root, the crisis exposes both an old hazard—outsized individual influence—and a new one, particular to the age of autonomous systems. The OpenAI board’s original vision, crafted in 2015 by Altman, Sutskever, Elon Musk and others, envisaged an arrangement that would bind the company’s fortunes to ethical stewards. If AI risked being, as Musk put it, the “most powerful, and potentially dangerous, invention in human history,” it ought to be helmed by a paragon of transparency and candour. But power, as so often happens, accrued to the most politically skilled. “The people who end up in these kinds of positions,” Sutskever warned, “are often…someone who is interested in power, a politician, someone who likes it.”
For New Yorkers—plagued by algorithmic errors, opacity, and the unaccountability of global platform firms—the Sutskever memo saga is a lesson in the perils of excessive trust. City Hall and the MTA have both touted new partnerships with OpenAI and smaller upstarts over the past year to streamline procurement, assess risks, and manage public information. Each contract ties the city ever closer to Silicon Valley and its internal dramas. In a marketplace defined by scale advantages, political intrigue within a single California headquarters now ricochets into Brooklyn classrooms and Bronx hospitals.
The second-order effects are subtler. New York’s robust tech sector, second only to San Francisco in capital raised, risks being eclipsed not because its engineers lack skill, but because access to cutting-edge AI models is controlled by a handful of secretive, venture-backed firms. Local entrepreneurs and universities are increasingly dependent on whatever rules OpenAI unilaterally sets—whether for pricing, APIs, or permissible research. If the firm’s leadership proves unreliable, the city’s broader innovation ecosystem could stall, squeezed between irreproducible code and uncertain licensing. Meanwhile, the prospect of opaque “alignment” work—designed to keep AI from going rogue but itself shrouded in mystery—adds an extra layer of unease for those working in public interest roles.
More broadly, the Altman affair offers an instructive counterpoint to how other global cities and polities reckon with AI risks. The European Union, famously fond of regulation-by-committee, has moved to constrain “high-risk” AI via the AI Act, prioritising transparency and algorithmic audit trails. China has taken a more centralised tack, with tight state control and mandatory data sharing for large models. New York’s approach, like America’s at large, has been to place outsized faith in the wisdom—and scruples—of Big Tech’s vanguard. It is a wager that looks increasingly fragile when proprietary labs resist even nominal outside scrutiny.
When trust falters in technology’s stewards
One might posit that New York’s institutions, adept at political compromise and grizzled by decades of corporate misfeasance, may be less vulnerable than others. Yet the evidence is mixed. The city’s algorithmic accountability law (Local Law 49 of 2018), which established a task force to scrutinise agencies’ automated decision systems, has been routinely blunted by claims of trade secrecy and by procurement from vendors whose black boxes none may open. The Office of Technology and Innovation, only recently elevated to cabinet level, remains meagrely staffed relative to the scale of its remit.
For the ordinary New Yorker, the imbroglio in OpenAI’s boardroom is easy to dismiss as a distant Silicon Valley melodrama. Yet it bodes ill for those seeking real accountability as artificial intelligence becomes ever more embedded across city life. If the purportedly altruistic structure at OpenAI deteriorates into internecine feuding and cloak-and-dagger intrigue, what hope remains for more ruthlessly commercial players?
Policy remedies abound on paper, but uptake has been slow. One immediate step would be clearer government procurement standards, mandating open audits and alignment disclosures as prerequisites for deploying AI in any city function. City councillors and state senators alike could do worse than mandate algorithmic impact assessments, akin to environmental reviews, before critical public infrastructure is assigned to any AI vendor. The experiences of European cities suggest these steps are feasible—if less lucrative for the dominant platforms.
Still, the incentives for inertia are powerful. Altman remains in his job; OpenAI continues unabated. Stories of internal dissent are quickly labelled “growing pains” by optimists or “green room drama” by competitors. Meanwhile, the city, like much of America, waits for a reckoning that may never come—or notices only when the next outage, glitch, or unexplained refusal of service exposes how little is truly understood about the machines now running the show.
Trust, we reckon, is the quiet linchpin of modern urban life. If that trust is upended—not by malicious code or hostile actors, but by boardroom bravado and sliding-door secrecy—the consequences could be less explosive, but far more insidious, than any science-fiction fever dream.
New York, restless and pragmatic, should insist that its future is not left to disappear in someone else’s vanishing memo. ■
Additional analysis and context by Borough Brief.