Artificial Intelligence (AI) is, well, controversial. Seen both as something that can bring great benefit to humanity and as something that could end humanity altogether, AI has always been a subject of debate, and the divide over it has been remarkably consistent. Those in favour of AI include Larry Page (co-founder of Google) and Mark Zuckerberg (co-founder of Facebook), while those wary of it include Elon Musk (co-founder of Tesla, Inc.) and Stephen Hawking (the renowned theoretical physicist). Here, though, I would like to focus on the question many people wonder about: what happens if AI gets out of hand?

In people's minds, AI and weaponry are a combination that spells destruction. Maybe rightfully so, I'm not sure. The combination is best illustrated in the Terminator films, where it leaves humanity on the brink of extinction. That outcome can be avoided if the technology stays in the right hands. But suppose it fell into the hands of, say, a terrorist group. Imagine what the objectives programmed into that AI would look like. Terrorists do not hold back, and in that scenario we can only expect destruction, and widespread destruction at that.

Let's talk about the age of advanced AI. From what I have observed, AI is still at a premature stage. It is nowhere near what we see in the Terminator films (unless there are things we don't yet know, but let's be optimistic), and I personally don't think it will reach that stage anytime soon. Yet even at this early stage, we have already witnessed a death at the hands of a self-driving Uber vehicle. Many will agree that it is too early for AI to be deployed in such fields. Perhaps by taking baby steps instead, we can buy ourselves more time to develop this form of AI properly.

Privacy. It's something we all want; it's essentially a right. But what happens if a program holding your data ends up in, say, a hacker's hands? All your personal data, including pictures, conversations with those close to you and with those you have, say, business relations with, is suddenly gone and in the hands of a complete stranger. Then the hacker demands money out of your pocket. Not a small amount either, but a boatload. And if you are unwilling to pay, expect the hacker to leak everything. Your life, now public for everyone to see. How would that make you feel? What would become of you?

We know that AI improves on its own over time. It senses its environment and interacts with it. AI evolves, just like us. However, for the more powerful systems, that growth may end up being exponential. A sufficiently powerful AI could overtake our intelligence and grow into something else entirely, something we, its creators, never intended.

There's a recurring theme here: AI can fall into the 'right' hands and the 'wrong' ones, and when it falls into the wrong ones, things will obviously go badly. So I think the real question, in the end, is whether we should monitor humans and their development of AI. But that is a topic which requires grappling with various other concepts, such as ethics, and it will have to be an article for another day.
