degree in Computer Information Systems, Rodney Don Holder is familiar with the unresolved dangers posed by advances in AI.
But AI development is not necessarily all rainbows and sunshine. There are some critical holes in the long-term planning by software developers and their employers. Here is a quick guide to the moral responsibility of AI developers, as well as a few dangers for which these developers are already seeking solutions.
Today, AI controls major functions in healthcare, defense, power grids, and information systems, where it provides real benefits to society. AI is also beginning to emerge in personal transportation (i.e., self-driving cars).
Naturally, this kind of control placed out of human hands also poses some risks. If a malevolent individual were to gain access to the “controls” behind many of these functions, terrible things could happen to the people who depend on them.
When using banking or wealth-related software, AI allows users to access their finances and personal information with great ease, which increases the effectiveness of banks, healthcare facilities, and more. Smart homes allow homeowners, from anywhere in the world, to verify that certain individuals (such as delivery drivers, friends, family, etc.) can safely enter parts of the property.
A great deal of software engineering goes into cybersecurity and other safety measures that protect the integrity of AI. Rodney Don Holder is himself a cybersecurity consultant and helps many clients strengthen their defenses against cyber-attacks.
In other words, are current developers fully aware of AI’s potential? Many admit that there is no way to know the full extent of the benefits AI could bring to humanity. But what about the cons? If the full extent is unknowable on one end of the spectrum, it must certainly be unknowable on the other end as well.
Destructive End that is Not Yet Apparent to Programmers
In the Ruby programming language, programmers create an AI process by defining parameters and terminating the process once a certain desired result is reached. However, Ruby students often forget to close a block, such as a while loop, with the `end` keyword. And it is not only students: even programmers breaking ground into deeper levels of a language are learning by trial and error as they discover new techniques.
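The pitfall described above can be sketched in Ruby itself. The method below is a hypothetical illustration (the method name and the counting logic are assumptions, not code from this article): the loop halts only because it has both a reachable stopping condition and the closing `end` that Ruby requires.

```ruby
# Hypothetical example: a process that stops once a desired result is reached.
# The while block must be closed with `end` -- omitting it is a SyntaxError,
# and omitting a reachable stopping condition makes the loop run forever.
def run_until_target(target)
  score = 0
  while score < target
    score += 1          # work toward the desired result
  end                   # forgetting this `end` breaks the program
  score                 # the process stops exactly at its intended goal
end

run_until_target(5)     # => 5
```

Note that the loop condition (`score < target`) is as important as the `end` keyword: a condition that can never become false would leave the process running past any intended goal.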
All that to say, suppose a team of software engineers builds a program intended to solve a certain problem. What guarantees that they closed off every piece of the code’s complexity, so that the program’s output never surpasses its intended goal? In truth, Rodney Don Holder knows that these errors come up often for programmers, and for the most part, organizations are careful to find such major bugs before releasing the software for public use.
Destructive Means that are Not Yet Apparent to Programmers
You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.