
Controversies of Artificial Intelligence

Disclaimer: The views expressed are those of the individual author. All rights are reserved to the original authors of the materials consulted, which are identified in the footnotes below.


In recent years, AI (Artificial Intelligence) has made rapid technological advances and is transforming human lives for the better. Whether the task is trivial or complicated, humans increasingly seek advice from, or leave decisions entirely to, AI algorithms. As AI advances, the ethical issues it raises may extend beyond what we can envisage.



One of the major concerns of employees is unemployment: technologies like AI could transform or even eliminate their jobs. Millions of truck drivers could lose their livelihoods if Tesla’s Cybertrucks were universally available in the future. A June 2019 study by Oxford Economics estimates that automation will replace 8.5% (approximately 20 million) of jobs around the world by 2030.[1] Although certain skillsets may become redundant, this could be an opportunity to change the nature of jobs and to rethink how we invest our time.


It is generally believed that intelligence comes from modelling and learning from one another, and AI is capable of modelling human relationships and conversations. A chatbot named Eugene Goostman passed the legendary Turing Test in 2014.[2] In this test, participants used text input to communicate with “unknown entities” for five minutes and were then asked to guess whether they had been chatting with a human or a machine. Eugene Goostman convinced 33% of the participants that it was a living human being.[3] This was a milestone in the history of artificial intelligence, as it was the first time that people interacted with a machine as if it were human. AI devices have unlimited energy to hold conversations or even maintain relationships, while humans have limited patience and attention spans. Machines can also be trained to process inputs and identify patterns efficiently. However, a 2015 study showed that machines can be fooled by random or meaningless patterns, confidently identifying in them things that are not there.[4]


Although AI appears beneficial to our daily lives, it has its flaws. AI can exhibit bias just as humans do. For instance, a software programme used to predict future criminals demonstrated racial bias against black defendants.[5] If left unchecked, such bias may lead to security threats. Robots are widely used in medicine and the military: scientists are using computer algorithms to develop vaccines,[6] and militaries around the world use autonomous weapons such as UAVs (Unmanned Aerial Vehicles) in place of human soldiers. Cybersecurity is crucial, as AI could be hacked and used for iniquitous ends. Some believe AI could develop aversion mechanisms, such as prejudice, that threaten our security. It is even claimed that AI algorithms may create vaccines and weapons with the potential to wipe out humanity itself.


Some people may consider AI machines citizens if they share the same perception, behaviours, and reward and aversion mechanisms as human beings. Sophia, a Hanson Robotics robot, was granted citizenship by Saudi Arabia in 2017.[7] Whether AI should share the same rights and ethical standing as humans is still an ongoing and controversial debate.


Without a doubt, the development of highly advanced AI could lead to both beneficial and harmful consequences. We have to remember that AI is a double-edged sword; whether it becomes a benefit or a threat depends on how mankind chooses to use the technology.


Helene Woo (Technology and Media)


SOURCES

[1] Oxford Economics, ‘How Robots Change the World’ (Oxford Economics, June 2019) <http://resources.oxfordeconomics.com/how-robots-change-the-world> accessed 5 Dec 2019.


[2] Per Liljas, ‘Computer Posing as Teenager Achieves Artificial-Intelligence Milestone’ (Time, 2014) <https://time.com/2846824/computer-posing-as-teenager-achieves-artificial-intelligence-milestone/> accessed 5 Dec 2019.


[3] Doug Aamoth, ‘Interview with Eugene Goostman, the Fake Kid Who Passed the Turing Test’ (Time, 2014) <https://time.com/2847900/eugene-goostman-turing-test/> accessed 5 Dec 2019.


[4] Anh Nguyen, Jason Yosinski and Jeff Clune, ‘Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images’ (Computer Vision and Pattern Recognition (CVPR), 2015) <https://arxiv.org/abs/1412.1897> accessed 5 Dec 2019.


[5] ‘Thirty Years of Research on the Level of Service Scales: A Meta-Analytic Examination of Predictive Accuracy and Sources of Variability’ (Psychological Assessment, 2013) <https://www.researchgate.net/publication/258920739_Thirty_Years_of_Research_on_the_Level_of_Service_Scales_A_Meta-Analytic_Examination_of_Predictive_Accuracy_and_Sources_of_Variability> accessed 5 Dec 2019.


[6] University of Miami, ‘Scientists use computer algorithms to develop seasonal flu vaccines’ (Science Daily, 2010) <https://www.sciencedaily.com/releases/2010/07/100709111332.htm> accessed 5 Dec 2019.


[7] Ryan Browne, ‘World’s first robot ‘citizen’ Sophia is calling for women’s rights in Saudi Arabia’ (CNBC, 2017) <https://www.cnbc.com/2017/12/05/hanson-robotics-ceo-sophia-the-robot-an-advocate-for-womens-rights.html> accessed 5 Dec 2019.
