Who is more suited to hold control?
This question is not fundamentally about states or governments, but about a kind of inevitability within development itself—namely, the desire for control, the need for security, and the fear of losing it.
These factors do not need to be eliminated, but the key issue is whether they dominate decision-making.
So the answer follows:
if a person in power makes decisions primarily driven by fear or emotionally shaped by their personal experiences, then they are not a suitable holder of power.
This does not require abstract debate to determine.
It can be observed through their past behavioral patterns—
the issue is not whether emotions exist, but whether decision-making is structurally dominated by them.
Secondly, who is fit to hold power ultimately depends on whether the person or the governing structure is aligned with the original purpose for which the technology was created.
That is: if a technology originates from an individual’s creative or technical breakthrough, this does not mean that those in political or state systems must be the ones in control;
but at the same time, it also does not mean that the creator of the technology is necessarily suited to hold power.
The ability to create technology is not inherently the same as the ability to govern its societal impact.
Therefore, the question should not be simplified into “who gets to control,”
but rather extended to ask whether the governance structure is aligned with the nature and purpose of the technology itself—rather than defaulting to existing power structures or identity categories.
We’re all going to default to the power structure where we live and we all live somewhere.
As for whether the governance structure aligns with the technology… that is beyond presumptuous.
I may be a bit direct here, but it seems to me that this article is built on several assumptions that haven’t really been questioned.
It takes for granted that technological development is inevitable, that control is inherently justified, and that power will naturally concentrate.
Within that framework, the question of “who gets to control” becomes the only question left—but that feels more like a constrained set of choices than genuine inquiry.
In many cases, what appears “inevitable” is just a retrospective explanation of a particular path, rather than evidence that it was the only possible one.