With the rise of AI, an old debate returns: are machines truly neutral, with humans alone responsible for what happens next?
I believed it. I believed that technology is a blank instrument, waiting for human intention to give it direction. But reading a bit of Bruno Latour (who argues that moral weight lies in the human/technology relationship, not in either alone) changed my perspective.
🔫 A gun is designed to kill efficiently.
💣 An atomic bomb is designed to destroy entire cities.
😶 Facial recognition is designed to identify and track people.
Their purpose is built into their architecture, and therefore they carry a moral dimension. Every design decision matters.
Take, for example, the social media algorithms we discussed before. Few people wake up wanting to divide society or erode attention spans. But the underlying logic (attention equals profit) inevitably rewards outrage, polarisation, and impulsive behaviour.
Division is not a side effect; it's the system behaving exactly as designed. Morality lives in systems and technologies, and once deployed, they reshape human behaviour.
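To make that concrete, here is a minimal, purely hypothetical sketch of an engagement-ranking function. The posts, scores, and weights are all invented for illustration; no real platform's code. The point is that nothing in these lines "wants" outrage, yet outrage wins anyway.

```python
# A toy, hypothetical feed ranker: "attention equals profit" reduced to a few lines.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_watch_time: float  # seconds a user is expected to linger (invented)
    predicted_shares: float      # expected reshares (invented)

def engagement_score(post: Post) -> float:
    # The objective is pure attention: time spent plus virality.
    # Morality appears nowhere in this formula, and that absence is a design choice.
    return post.predicted_watch_time + 5.0 * post.predicted_shares

feed = [
    Post("Nuanced policy explainer", predicted_watch_time=20, predicted_shares=1),
    Post("Outrage bait about the other side", predicted_watch_time=45, predicted_shares=12),
]

# Ranked purely by engagement, the outrage post comes first, every time.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):6.1f}  {post.text}")
```

Swap out that one scoring function and the whole feed changes character. The morality lives in that line.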
And AI...?
It’s not conscious or alive (😆 it really isn’t).
AI systems don’t “choose” to be manipulative, discriminatory, or unfair. AI systems do as instructed (like the gun). That’s why ethics cannot be an afterthought.
But it’s built to learn from us (from our language, our data, and our collective digital past).
That means it does not just reflect human intelligence; it also reflects human bias, desire, and fear. Especially since our online behaviour is not necessarily our best behaviour...
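Here is a hypothetical miniature of "learning from us". The corpus below is invented, and counting word co-occurrences is a crude stand-in for real training, but it shows the mechanism: a system with no opinions of its own still absorbs the skew in its data.

```python
# A toy "model" that only counts which words appear together in (made-up) text.
# It has no intentions, yet it inherits whatever patterns the data carries.

from collections import Counter
from itertools import combinations

# Invented corpus standing in for "our collective digital past".
corpus = [
    "engineer he built the system",
    "engineer he fixed the bug",
    "nurse she helped the patient",
    "engineer she designed the chip",
]

pairs = Counter()
for sentence in corpus:
    for a, b in combinations(sentence.split(), 2):
        pairs[(a, b)] += 1

# The model now associates "engineer" with "he" more strongly than with "she",
# simply because the data did.
print(pairs[("engineer", "he")], "vs", pairs[("engineer", "she")])  # prints: 2 vs 1
```

Scale that up from four sentences to the whole internet and you get the same effect, just harder to see.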
Honestly, I have no clear idea where we’ll end up.
➡️ Will AI development become fundamentally self-referential... systems improving systems, until nobody understands what’s being optimised?
➡️ Will we become like the Borg from Star Trek?
Perfectly efficient, connected, and stripped of individuality?
Time will tell. But one thing is for sure: technology is not without morals.
Peace out!