San Francisco | January 9, 2026
Switzerland's new open-source AI model Apertus tackles what kills 95% of AI projects: lack of transparency. Full visibility into code and training data promises to solve bias, compliance headaches, and corporate trust issues.
"The Swiss approach aims to regulate AI in a way that strengthens Switzerland as a location for business and innovation while keeping societal risks as low as possible. It’s like they’re trying to have their cheese and eat it too—and knowing the Swiss, they might just pull it off."
This quote from my book Artificial Stupelligence has just found its perfect match in Apertus, Switzerland’s new open-source language model. It is the latest expression of the Swiss knack for precision, transparency, and careful balance.
But what does Apertus really mean for business?
Companies hate black boxes, especially when the stakes are high. Think hiring algorithms or loan approvals. As illustrated in my book, Amazon’s hiring bot crashed and burned because it hid biases that nobody could fix. Banks have faced backlash for unfair loan denials driven by hidden prejudices in AI.

Apertus is different. Every piece of code and every item of training data can be inspected, so businesses can spot and fix bias early. With that end-to-end visibility, from source code to training data, companies can see how the model actually works, meet regulatory requirements without second-guessing, and avoid costly surprises like fines, PR headaches, or legal battles.
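What does "spot and fix bias early" look like in practice? A minimal sketch: with an open model you can run its decisions through a simple disparity check, such as the four-fifths rule used in US hiring law. The data, group labels, and threshold below are illustrative, not taken from Apertus itself.

```python
# Illustrative bias audit: flag groups whose selection rate falls below
# 80% of the best-performing group's rate (the "four-fifths rule").
# The outcomes here are made-up hiring-screen results, not real data.
from collections import Counter

# Hypothetical screening outcomes: (group, passed_screen)
outcomes = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rates(records):
    """Return the pass fraction per group."""
    totals, passes = Counter(), Counter()
    for group, passed in records:
        totals[group] += 1
        if passed:
            passes[group] += 1
    return {g: passes[g] / totals[g] for g in totals}

rates = selection_rates(outcomes)          # {'A': 0.75, 'B': 0.25}
best = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * best}
print(flagged)                             # {'B': 0.25} -> worth investigating
```

The point is not the arithmetic, which is trivial, but the access: a check like this is only possible when you can see the model's inputs and outputs end to end.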
Think of Apertus like a Swiss watch: precision-built, piece by piece, with total clarity. Its development is handled by a tight team of experts, regulators, and developers. They regularly inspect and fine-tune every part, ensuring reliability and ethics without chaos.
Unlike Wikipedia, where anyone can edit, this is a carefully controlled process. It’s openness matched with discipline, giving businesses a trustworthy AI that won’t unexpectedly fail them.
Amazon’s hiring bot offered a cautionary tale: hidden biases led to unfair rejections—and a PR nightmare. With Apertus, companies can finally see inside the engine. They control the data and can root out problematic biases. Decisions are explainable and auditable, not mysterious. That reduces risk, improves fairness, and satisfies tightening regulations.
Banks face similar challenges. Opaque loan decisions have triggered accusations of discrimination. Apertus lets banks own their AI training data and audit every decision. Its multilingual reach helps account for local nuances, boosting accuracy and fairness. For financial firms juggling compliance and trust, the payoff is huge.
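"Audit every decision" can be sketched concretely: each automated decision is logged with its inputs, the model version, and a content hash, so any outcome can be reconstructed and reviewed later. The record format and version string below are my own illustration, not an Apertus API; Apertus is a language model, not a loan system. The point is that open weights and open training data make this kind of end-to-end trail feasible.

```python
# Illustrative audit log for automated decisions. Field names and the
# model-version tag are hypothetical; only the pattern matters.
import hashlib
import json
from datetime import datetime, timezone

def record_decision(applicant, decision, model_version, log):
    """Append a tamper-evident decision record to the audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": applicant,
        "decision": decision,
    }
    # Hash the canonical JSON so later edits to the record are detectable.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["digest"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

log = []
entry = record_decision(
    {"income": 52000, "requested": 15000},  # hypothetical applicant
    "approved",
    "apertus-example-version",              # placeholder, not a real tag
    log,
)
print(entry["decision"], entry["digest"][:12])
```

A regulator, or the bank's own compliance team, can then replay any logged decision against the published model and data, which is exactly what a closed model forecloses.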
Apertus also addresses something most big models ignore—copyright and content ownership. By respecting opt-outs and carefully managing training data, it helps companies avoid the risk of training on stolen or unlicensed content. This means less chance of costly copyright fights and more respect for creators' rights. In business, that’s a win for brand integrity and peace of mind.
MIT’s recent headline statistic looms large: 95% of AI projects either fail or fail to deliver. The causes? Lack of trust, regulatory barriers, bias, and ill-fitting tech.
Apertus isn’t the fastest or flashiest AI around. But it’s built to address exactly those pain points businesses wrestle with.
For companies tired of betting on black-box AI roulette, Apertus offers a seat at the watchmaker’s bench—a chance to build AI projects that last.