> We invent machines to free ourselves from labour
That's a very romantic view.
The development, production and use of machines to replace labour is driven by employers to produce more efficiently, to gain an edge and make more money.
I would hope people realize that money in itself is merely digits on a computer, and that the real power here belongs to the people, since the AI inherited and learned everything from us.
I know that's a simplification, but we uphold the very contract that controls us. The people get to decide how this plays out, and as much as I'm hopeful we evolve into a world more like Star Trek, that hope skips over the ugly transition, which could succeed or fail, that would get us there.
But we aren't that far off from a replicator if our AI models become advanced enough, in a world of atomic-scale computation, to rearrange atoms into new forms. It seemed like fiction before, but it's within humanity's reach, provided we don't destroy ourselves first.
Our moral and political development severely lags our technological development. I have very little confidence that it will ever catch up. Looking back over the post-WW2 era, we have seen improvements (civil rights, recognition of past injustices, expansion of medical care in many countries) but also serious systemic regressions (failure to take climate change seriously, retreat to parochial revenge-based politics, failure to adequately fund society's needs, capture of politics and law by elites).
My main concern about AI is not any kind of extinction scenario but the basic fact that we are not prepared to address the externalities it will likely produce, because we are historically terrible at addressing externalities.