12 Comments
Devesh

This maps to something I've seen firsthand in production. We run an AI extraction system for a consumer products business, and the adoption pattern was exactly trust-network shaped — not top-down rollout, not feature-driven.

The system went from 'interesting demo' to 'daily dependency' because one domain expert started using it, got good results, and told three colleagues. Those three told their teams. Within two months we had organic adoption across departments that no amount of training sessions had achieved.

The interesting wrinkle: the trust wasn't in the AI. It was in the person who vouched for it. When the original champion moved to a different project, adoption in her old team actually dipped until someone else stepped into that trust node role.

The billion-user question isn't 'how good is the model' — it's 'who's the person in each trust network that makes the introduction.' That's the GTM challenge this piece gets right.

Sam Weinstein

Dealing with this now: people problems. Trust. Education. Distribution.

8Lee

One word that was missing from this entire piece was "quality," which is a building block for trust in a world where low-quality output is so easy to create. And while quality is a sliding scale, it's one of those things that is a bit like "you know it when you see / feel / taste it".

There are Apple-level design patterns, and then there's everyone else. We all know what this means. No one vibe-coded the beloved ergonomics and form factors that we use every single day. We don't see the signatures inside the paneling, but we know they are there.

Michael Albers

"Trust effects, not network effects" is the right framing, and it runs deeper than the sovereign layer.

The same instinct driving nations to demand sovereign compute is what makes individual operators build access controls before handing their AI anything personal. Read-only first. Explicit approvals. Access that expands with a track record. Not paranoia — ownership. Ownership of your data and sensitive information.
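
To make that concrete, here's a rough Python sketch of the "earn your access" pattern. It's an illustration rather than the actual implementation from my write-up; the AgentTrustLedger class, the tier names, and the promotion threshold are made up for the example.

```python
from dataclasses import dataclass
from enum import IntEnum


class AccessTier(IntEnum):
    """Permission tiers, least to most privileged."""
    READ_ONLY = 1
    WRITE_WITH_APPROVAL = 2
    AUTONOMOUS = 3


@dataclass
class AgentTrustLedger:
    """Tracks an agent's track record and gates what it may do."""
    tier: AccessTier = AccessTier.READ_ONLY
    successes: int = 0
    promotion_threshold: int = 25  # illustrative bar for earning the next tier

    def record_success(self) -> None:
        """Access expands only after a track record accumulates."""
        self.successes += 1
        if self.successes >= self.promotion_threshold and self.tier < AccessTier.AUTONOMOUS:
            self.tier = AccessTier(self.tier + 1)
            self.successes = 0  # start earning the next tier from zero

    def record_failure(self) -> None:
        """Any failure drops the agent back to read-only."""
        self.tier = AccessTier.READ_ONLY
        self.successes = 0

    def may_perform(self, action: str, human_approved: bool = False) -> bool:
        """Read is always allowed; write depends on tier and explicit approval."""
        if action == "read":
            return True
        if action == "write":
            if self.tier is AccessTier.AUTONOMOUS:
                return True
            return self.tier is AccessTier.WRITE_WITH_APPROVAL and human_approved
        return False  # anything unrecognized is denied by default


if __name__ == "__main__":
    ledger = AgentTrustLedger()
    print(ledger.may_perform("write"))                       # False: starts read-only
    ledger.tier = AccessTier.WRITE_WITH_APPROVAL
    print(ledger.may_perform("write", human_approved=True))  # True: explicit approval
```

The point of the sketch is just the ratchet: read-only by default, explicit approvals in the middle, and autonomy only after a demonstrated track record.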

The wall isn't just between countries and foreign models. It's between individuals and the tools they're letting into their most personal contexts. Both are trust problems, not capability problems.

I just wrote about the individual version of this yesterday and how I'm solving that: https://medium.com/@malbers/i-built-an-ai-system-it-had-to-earn-its-access-9b4d792c7e9d

David Mensah

Sakina, https://www.boxrewards.shop/mission is how I have put much of what you have written into action. Boxrewards as a whole is an AI Social Commerce Network for People, but each individual element of (1) Product Discovery, (2) Payments, and (3) High-Level Customer Service/Attention is separately enabled to cater to Agents/Agentic AI.

Ramesh Raskar

Thanks, Sakina. Follow https://digidoot.in for how the trust architecture could be built for a billion+ people.

Walter Johnson

"Arsiwala’s 'Trust Wall' is a roadmap for the institutionalization of the post-truth era. By prioritizing 'specific realities' to gain market share, the tech industry isn't just localizing technology; it is subsidizing the fragmentation of objective truth, if not a post-modern obliteration of an objective truth. This piece should be viewed as alarming rather than wise guidance, precisely because it ignores the catastrophic downside of building AI systems aligned to reflect 'specific realities.'"

There is only one reality. "Specific realities" is often a euphemism for the deliberate twisting of facts. We see this in the persistence of thin-evidence claims—such as the debunked link between vaccines and autism—where "local reality" is used to justify harm over public health. This alignment solution is destined to reinforce national ideological control. It provides a toolkit for Modi’s Hindu nationalism to further marginalize religious minorities, for Iran’s religious extremism to codify its geopolitical erasures, and for Israel to build automated justifications for religiously based repression and expansion.

In the West, "specific realities" will simply provide digital armor to the religious right, the progressive left, and Christian nationalists, deepening existing fissures. Furthermore, within these silos, "local realities" will grant unprecedented power to "influencers" who will demand their own aligned AI agents. These figures are already primary drivers of misinformation; giving them the ability to deploy hyper-personalized, "aligned" AI agents will make today’s concerns about social media echo chambers seem quaint by comparison. We are moving toward a world where the "Trust Wall" is actually a prison of our own curated delusions.

Comment removed (Apr 19)
Walter Johnson

Don’t have WhatsApp. Who are you?

JM Ahmed

Love your Trust Wall framing, Sakina! In a survey of 48,000+ respondents across 47 countries, 60% of people in emerging economies trust AI, compared with 40% in advanced economies, yet 58% still regard AI as untrustworthy. How should AI platforms measure and optimize for trust adoption rather than just usage, especially in high-friction markets where trust is fragile?

Clint Cain

"...trust remains local" as a builder, this is my primary concern.

Almost everything I build now starts with being local; desktop and server applications may be on their way back, with a reduction in SaaS apps, not services.

I think about working in teams and companies, and why they're not adopting AI as fast, and it's for every reason you have described here.

This is a wonderful article. It truly explains the dynamics of how humans scale systems.

I think about myself using Claude Code every single day: even though I use it, and even though I'm supposed to trust it, I still have concerns. I mean, it can run everything as me.

My urge is still to download every local model my computer can handle; it makes me feel better, lol.

When it comes to building an app for clients, the biggest question is: can they trust AI? They have intellectual property, financial data, and proprietary information that need to be secured. And if they don't trust the "entity" building a system with AI, then there will be no progress.

This is a great call-out: seeing the big platforms come together on one stage, showing that maybe they can all work together, can be an illusion meant to force trust for global adoption. I agree.

I like the idea of having a universal, built-in, machine-level AI that is your buddy, but I also agree that probably won't happen because, well, capitalism.

Sabyasachi B.

The YouTube analogy is doing real work here. The GEMA blackout in Germany and the Pakistan internet shutdown are exactly the kind of examples that make the Sovereign Wall feel concrete rather than theoretical. What strikes me most is the Trust Latency concept — in markets like India, the cost of being early is borne by a very small group of trusted intermediaries, and the mass adoption only follows after those nodes have already absorbed the risk. The M.A.N.A.V. framing from PM Modi signals that India is positioning itself not as a passive consumer of AI but as an architecture-setter. Whether frontier labs can build Sovereign Vault models that actually satisfy that demand without fragmenting global capability is the key question for the next two years.

Sabyasachi B.

India is probably the richest live case study for everything this piece describes. The M.A.N.A.V. framing Modi rolled out at the AI Impact Summit isn't just diplomatic positioning; it's the government signalling that it will play Trust Broker at national scale through infrastructure like BHASHINI (multilingual AI across 22 scheduled languages) and the IndiaAI sovereign cloud that now serves 500+ government entities and processes 15 million inferences daily. The key test is your point about Trust Latency. India's approach is to route AI through institutions people already trust (state governments, Jan Aushadhi pharmacies, gram panchayats) rather than expecting a billion users to adopt frontier apps directly. Whether that intermediary layer can move fast enough before direct-to-consumer apps lock in urban users is the tension worth watching.