Is Public Fear Slowing Down Tech Development?


Imagine a world without drivers, without lawyers, without radiologists. Imagine a world where there’s no need to date to figure out if you’re compatible with someone, or where you don’t need to vote because a machine can read your thoughts and do it for you. Imagine a world where we live our lives through cyborgs. While all of this is certainly still ‘out there’, it’s a future that’s not as unrealistic as it once was.

Artificial intelligence is coming, and no one can stop it. In fact, AI is already all around us. When Netflix recommends shows based on what you’ve previously watched? That’s AI. When Google autocompletes your search? That’s AI. When your credit card provider calls to confirm a purchase? That’s AI. And the potential for AI is outstanding… unless it’s stopped in its tracks by public fear.


Genetic Engineering 2.0

AI is an area that’s still largely unknown, and right now, it’s exciting and terrifying in equal measure. This is certainly not the first time we’ve found ourselves in such a complex situation. AI can be seen as the next genetic engineering. After all, look at the potential that genetic modification has: it’s been used to create vaccines, to treat infertility, and to help doctors learn more about cancer, diabetes, and Parkinson’s disease. But it’s also been used to clone sheep, create glow-in-the-dark cats, and design salmon that grow faster than you can say ‘what’s for dinner?’ It’s an ethical nightmare.

In terms of artificial intelligence, there is a whole can of ethical worms just waiting to be opened. Consultancy McKinsey estimates that 78% of predictable physical work, and 25% of unpredictable physical work, could be undertaken by machinery in the future. And then there are the open questions: how will profits from machine-undertaken work be distributed? How will AI affect behaviours, communications, and society? How can we be sure that AI won’t make some horrific mistake? Can we keep machines safe from hackers? Is it possible to completely eradicate AI bias? Will robots have rights?


Addressing Public Fear

When we look at these sorts of questions, it’s easy to see exactly why fear is sweeping across the nation. Professor Jim Al-Khalili, President of the British Science Association, even throws another fear factor into the mix: will AI end up uncontrolled and unregulated? Right now, anything is possible.

Professor Al-Khalili claims that AI development will fail to continue at the predicted rate unless there is greater transparency around the subject. The professor calls for AI, along with other disruptive technologies, to be included in school curriculums, and for organisations to be more open about how AI works, in a bid to make this futuristic technology more understandable and, ultimately, more normal. We are already seeing transparency become more of a focus in the AI world, with Elon Musk co-founding the OpenAI research company, and Microsoft, IBM, and Google joining forces in the Partnership on AI.

The fear of the unknown can definitely be seen (and felt) as more and more AI applications worm their way into everyday life. Whether this fear will slow future development remains to be seen.

