Rodney Don Holder’s Guide to the Moral Responsibility of AI Developers

Artificial Intelligence (AI) is great. It powers Google to answer everyone’s questions in seconds, lets vital transactions take place at lightning speed, and increases traffic safety through automated traffic lights and emergency braking in vehicles. As a cybersecurity expert with a degree in Computer Information Systems, Rodney Don Holder is familiar with the unresolved dangers posed by advances in AI.

But AI development is not necessarily all rainbows and sunshine. There are some critical holes in the long-term planning by software developers and their employers. Here is a quick guide to the moral responsibility of AI developers, as well as a few dangers for which these developers are already seeking solutions.

There is no doubt that AI makes human life more convenient. It makes tedious tasks more efficient, which frees people to spend more time on creative work and on developing greater emotional intelligence. Here are just a few areas where AI is already making enormous headway.

Controls

Today, AI controls major functions in healthcare, defense, power grids, and information systems. These functions are especially helpful to society. AI is also beginning to emerge in personal transportation (i.e., self-driving cars).

Naturally, this kind of control placed out of human hands also poses some risks. If a malevolent individual were to gain access to the “controls” behind many of these functions, terrible things could happen to users (i.e., human beings).

Verification

In banking and wealth-management software, AI allows users to access their finances and personal information with great ease. This increases the effectiveness of banks, healthcare facilities, and more. Smart homes allow homeowners, from anywhere in the world, to verify that certain individuals (such as delivery drivers, friends, family, etc.) are safe to admit onto parts of the property.
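As a rough sketch of the idea (the class and method names here are hypothetical, not any real vendor’s API), this kind of verification often amounts to an allow-list lookup, expressed below in Ruby:

    # Hypothetical smart-home allow-list check. Class and method names are
    # illustrative only, not any real vendor's API.
    class FrontDoor
      def initialize(allowed_visitors)
        @allowed_visitors = allowed_visitors
      end

      # Returns true only when the identified visitor is on the allow list.
      def admit?(visitor_id)
        @allowed_visitors.include?(visitor_id)
      end
    end

    door = FrontDoor.new(["delivery_driver_42", "grandma"])
    puts door.admit?("delivery_driver_42") # => true
    puts door.admit?("stranger")           # => false

Real systems layer biometrics, cryptographic credentials, and audit logs on top of this, but the core decision is still a membership check like the one above.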

Security

A great deal of software engineering goes into cybersecurity and other safety measures to protect the integrity of AI. Rodney Don Holder is himself a consultant in the world of cybersecurity and helps clients strengthen their defenses against cyber-attacks.

Counter to the paranoia expressed by a neighbor wearing an aluminum foil hat, the danger behind AI is not that these algorithms would suddenly come to hate humanity and seek its annihilation. Rather, thought leaders in AI fear that a system pursuing an unintentionally harmful goal, or using unintentionally harmful means to reach a legitimate one, could accidentally harm or destroy large portions of humanity.

In other words, are current developers fully aware of AI’s potential? Many admit that there is little way to know the full extent of the benefits AI could bring humanity. But what about the cons? If that is true of one end of the spectrum, it must certainly be true of the other end as well.

Destructive End that is Not Yet Apparent to Programmers

In the programming language Ruby, for example, a developer writes a process by defining its parameters and specifying the condition under which it stops, i.e., the point at which the desired end is reached. Ruby students, however, often struggle to terminate a process correctly, such as a while loop whose exit condition is never met. And it is not only students: programmers breaking ground into deeper levels of a language learn by trial and error as they discover new techniques.
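A minimal Ruby sketch of the mistake described above: the first loop has a correct exit condition and terminates, while the second (left commented out so the file still runs) never updates its counter and would run forever:

    # A loop with a correct exit condition: it terminates once the goal is reached.
    progress = 0
    while progress < 10
      progress += 1
    end
    puts "Finished after #{progress} steps."

    # The same loop with a subtle bug: nothing inside the loop ever updates
    # `progress`, so the exit condition is never met and the process runs forever.
    #
    # progress = 0
    # while progress < 10
    #   puts "still working..."
    # end

Forgetting the `end` keyword itself is caught by Ruby’s parser before the program runs, but a loop whose condition never becomes false compiles and runs cleanly, which is exactly why it slips into production more easily.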

All that to say, a team of software engineers builds a program intended to solve a certain problem. But who is to say they were careful to close off every piece of the code’s complexity, so that the program’s deliverable never surpasses its intended goal? In truth, Rodney Don Holder knows that these errors come up often for programmers, and for the most part, organizations are careful to find these major bugs before releasing the software for public use.
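One generic defensive pattern, sketched here as an assumption rather than any particular team’s practice, is to bound a process with a hard cap so that even a buggy exit condition cannot run away:

    # Hypothetical defensive pattern: a hard cap bounds the process even when
    # the "real" exit condition is buggy and never fires.
    MAX_STEPS = 1_000 # illustrative limit, not a recommendation for any real system

    steps = 0
    goal_reached = false # placeholder flag; a buggy program might never set this

    until goal_reached
      steps += 1
      # ... one unit of work would go here ...
      if steps >= MAX_STEPS
        warn "Safety cap reached after #{steps} steps; stopping instead of running away."
        break
      end
    end

The cap does not fix the underlying bug; it only limits the damage the bug can do while it is being found.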

But as AI becomes more intricate and complex, it is getting harder to foresee when the outcome of a software process will exceed what its engineers intended. This is especially true as experts try to comprehend AI self-development, in which a system pursues ongoing improvement as it sees fit. As AI gains traits such as emotional intelligence, experts do not fully understand the destructive potential this could have on civilization.

Destructive Means that are Not Yet Apparent to Programmers

What about circumstances wherein the end is clear and every “while loop” is properly terminated? There remains the possibility that the AI program uses harmful means to arrive at that end. Rodney Don Holder concludes that a helpful metaphor from the Future of Life Institute illustrates the importance of being careful:

You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.
