
AI Ethics: Can We Use This Technology Responsibly?

By Sydney Butler / August 30, 2019

Knowledge by itself is morally neutral. Take atomic science as an example. You can use it to do constructive things, such as making medical devices or atomic power stations. You can also use that same knowledge to build a bomb. It's a human being who decides how, choosing to apply knowledge in ethical or unethical ways. So what about AI ethics?

AI ethics are no different. Artificial intelligence is already a field that has yielded some incredibly powerful tools. Just looking at the short- to medium-term, it's clear that AI is set to become the defining technology of the 21st century. It's not surprising either. The complexity of our world, our society, and our problems exceeds what mere human intelligence can handle. Tough issues in medicine, the environment and the mere management of day-to-day life will be either impossible or needlessly difficult to address without the help of artificial intelligence.

There are already preemptive movements that aim to ban AI, or at least certain variants of it. However, a field that yields such useful tools is unlikely to be snuffed out. The best we can do is think cogently about how to regulate and apply AI technology. Whatever technology we create and use, it stands to reason that it should cause minimal harm and be of great benefit. Except, how would that work?

Biases in AI

One of the most powerful aspects of Artificial Intelligence comes from Machine Learning (ML). This describes the body of methods that allows a digital system to come up with solutions that the programmers themselves did not have.

In short, the software is fed information, which it then uses to create algorithms that can solve complex problems rapidly. Examples of such problems include recognizing a face or predicting criminal behavior.

Despite AI being a powerful and sophisticated approach to computing, the basic principle of “garbage in, garbage out” still applies. In other words, the algorithms produced through ML techniques are only as good as the data they learn from. If you feed them biased data, they will develop biased solutions.
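To make that concrete, here's a minimal sketch (not from the original article, with entirely invented data) of "garbage in, garbage out" in practice: a toy scikit-learn classifier trained on data that under-represents one group ends up far less accurate for that group.

```python
# A toy illustration of biased training data producing a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # One feature per example; each group's true label depends on its own baseline.
    x = rng.normal(shift, 1.0, size=(n, 1))
    y = (x[:, 0] > shift).astype(int)
    return x, y

# Group A dominates the training data; group B is barely represented.
xa, ya = make_group(1000, shift=0.0)
xb, yb = make_group(20, shift=2.0)

model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on balanced test sets: accuracy is high for A and poor for B,
# because the model mostly learned group A's baseline.
xa_t, ya_t = make_group(500, shift=0.0)
xb_t, yb_t = make_group(500, shift=2.0)
print("accuracy on group A:", model.score(xa_t, ya_t))
print("accuracy on group B:", model.score(xb_t, yb_t))
```

Nothing about the algorithm is malicious here; the skewed data alone is enough to produce the disparity.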

If you think about this for a minute, it should be clear that this is a major problem. We're already seeing bias when it comes to the facial recognition of people with darker skin. What about medical algorithms that overlook women or people from minority communities? This is such a significant problem that organizations such as the Algorithmic Justice League are actively working to raise awareness of AI bias.

Ethical Decisions by AI

If you want to use AI in an ethical way, then the AI itself must behave in an ethical way. That stands to reason, right? That's easy enough to say, but actually implementing it is an entirely different problem.

Think about a situation where a car is going to be in an unavoidable accident. No matter what happens, someone will die or be terribly injured. Different human drivers are probably going to make different decisions in this situation. Most people will try to preserve their own lives and those of their passengers in most cases. If someone's about to run over a dog, a child or some other particularly vulnerable victim, they might choose to swerve and crash, hurting themselves instead. People often have to react without much thought in these situations. They don't have the time to think about it philosophically and come to some sort of morally acceptable solution.

An AI driving that same car, on the other hand, has all the time in the world. So the real question is what sort of ethical rules its creators should build into it. The problem is that the view on what is or is not ethical is hardly universal. Some people would say you should try to save the greatest number of lives. Others say that the AI should prioritize its own passengers. There's another angle to it as well. Would you want to ride in an AI-controlled car that might choose to sacrifice you for the greater good?

It might not be rational, but people prefer the illusion of control. Humans feel safer when in control of a vehicle, despite that actually being less safe than having a machine drive. That's why it has been better to automate driving stealthily through technologies such as collision avoidance systems. The driver thinks they are in control until things go pear-shaped. Then the automated systems kick in. Full automation, especially where life and death decisions have to be made, is going to be both a tough sell and a hard technical problem to solve.

Will AI Make Us Free or Obsolete?

This is one of the most common questions, and the short answer is that no one really knows what sort of impact AI will have on society. Technological advancement transforms us. Whether that change is positive or negative is a matter of perspective and context.

One thing that’s almost certain is that AI solutions are going to take over a lot of jobs that are currently being done by people. The simpler the job is, the more likely that a machine will do it. This isn’t a new trend at all. We’ve been through several industrial revolutions that have shifted the labor force in one direction or another.

These days, for example, a small percentage of people work in agriculture. Tractors, combine harvesters and other mechanized technologies have eliminated the need for massive human labor forces. Now AI is set to eliminate the small number of people needed to operate these devices. Self-driving tractors are already working in the fields. Drone maker DJI sells the Agras drone, which can automatically spray pesticide over as much as ten acres per hour.

People in complex, creative jobs (such as writers!) can’t get too comfortable either. There are many examples of AI applications generating content that would pass as human-written, especially in highly structured fields such as financial market reporting or sports.

In the long term, towards the end of this century, it’s hard to imagine any general knowledge or physical task that machines won’t be capable of. While no one has perfect answers about the future, we’ll have to rethink everything, from the basic scarcity principle of economics to just what humans are meant to do with their time on Earth. After all, the idea that we need to work in order to live isn’t an objective truth. It’s a side effect of scarce resources. Proponents of post-scarcity economics generally think that AI is a key technology to make it all work.

Privacy in the Age of AI

There are many fictional stories of hyper-competent characters that can gain impressive insight into the private lives of the people they meet. Sherlock Holmes, from the stories by Sir Arthur Conan Doyle, is possibly the best known example. Holmes would notice some small detail about someone and then use his vast general knowledge to create theories about what those details mean. In the books he comes up with impressive insights, leading to the conviction of villains who would otherwise have gotten away with their crimes.

But what if you aren’t a villain? What if you are just someone trying to live your life in peace? Would a Sherlock Holmes who publicly and correctly deduces private facts about your life be welcome?

This is one of the central ethical issues raised by the power of Big Data and the learning technologies that can gain insight from that data. Even if you don't disclose intimate facts about yourself, AI can join the dots.

The worst part is that future AI can be applied to data collected in the past. So information about you that is anonymous now can be de-anonymized one day. It's like a ticking time bomb.
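As a toy illustration of how that "joining the dots" works (all names and records below are made up), here's the classic linkage attack: merging an "anonymized" dataset with a public one on a few quasi-identifiers puts names back on supposedly anonymous records.

```python
# A hypothetical re-identification via record linkage.
import pandas as pd

# An "anonymized" medical dataset: names removed, quasi-identifiers kept.
medical = pd.DataFrame({
    "zip": ["10001", "10001", "94107"],
    "birth_year": [1985, 1990, 1985],
    "gender": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# A public dataset (say, a voter roll) that does include names.
public = pd.DataFrame({
    "name": ["Alice", "Carol"],
    "zip": ["10001", "94107"],
    "birth_year": [1985, 1985],
    "gender": ["F", "F"],
})

# Joining on the quasi-identifiers links names back to diagnoses.
linked = medical.merge(public, on=["zip", "birth_year", "gender"])
print(linked[["name", "diagnosis"]])
```

Modern ML just automates and scales this kind of inference far beyond what a simple join can do.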

The other big AI privacy issue is that current AI technologies, such as fluent voice recognition, enable mass surveillance that would never be possible using human agents or traditional software.

In the end, the only thing that can mitigate AI privacy breaches is regulation and legislation. Think of phone tapping and other current interception techniques. If the government wanted to, it could tap your phone. It's the legal system that prevents it from doing so at will. We'll need the same sorts of legal protection against using AI on historical data or in mass surveillance.

Treating AI Ethically

There’s one interesting question around AI that’s, for now, a purely academic one. That question relates to the ethical treatment of AIs themselves. In other words, at what point should we care about the rights of AI? As I have said elsewhere, the sort of “Strong” AI that would fall into the same sentience ballpark as human beings doesn’t yet exist. We have no way of knowing if humans will ever figure out how to create conscious, self-aware intelligence.

Even the most sophisticated AI applications of today don’t have the architectural complexity of “simple” animals such as insects. It's also not clear at what point the mistreatment of AI entities will become an ethical issue. How do you determine if an intelligence has the capacity to experience subjective suffering?

Since sentient AI is on no foreseeable AI development road map, this might seem like an academic question. But it actually matters today, because as a society we haven't even figured out how to treat natural intelligences ethically. There are animals, such as dolphins and chimpanzees, who are close enough to us that some people think they should be granted human-like personhood.

Then again, pigs are incredibly intelligent, yet we farm and slaughter them by the millions for food. We are going to have to do a lot more thinking about where the boundaries of cruelty and sentience are. The problem is that there are no objective right and wrong answers here. Morals and ethics always have some degree of gray area. For developers, agonizing over when AI reaches the point where we have to worry about cruelty will probably be seen as running counter to its usefulness and profitability. How future AI developers handle this issue will be interesting, but no one alive today is likely to see the outcome.

Military AI Applications


Movies that warn us about "rogue AI" have almost always centered on robotic killing machines. This is the nightmare Skynet scenario, where machines that can launch nukes send hunter-killers after us all.

The good news is that this is, for the foreseeable future, confined to the realm of science fiction. Super-intelligent strong AI doesn't exist. We don't know how to create it, and it's unlikely that one will simply emerge spontaneously.

That does nothing to take away from the ethical worries about AI killing machines, though. Major world military powers have been using automated weapons platforms such as the Predator drone or MAARS robots for many years now.

So far, a human decision-maker has always been involved whenever a decision to take lives is made. However, that's changing. There's a strong international push to let autonomous systems select and attack enemy targets without asking a human first. It makes sense, since adding that sort of delay weakens the weapon's effectiveness, but the ethical issues are myriad. There's already a large counter-movement that wants to ban "killer robots" before they ever enter service in the armed forces. Then again, maybe future wars will be fought with nothing but machines on both sides. Is that better? Perhaps.

Removing Humans from the Loop

There's a major tension between letting AI technology reach its full potential and keeping human accountability in that process. Many AI systems, especially military ones, will work best if there are no people slowing them down. If you have to stop to ask a human to review every important decision, then you can only operate at the speed of human decision making. This seems a little counter-productive. Yet, if we don't have some form of human oversight over AI decisions then we run the risk of AI making decisions that aren't in our best interests.

I can offer no real solutions to this problem. In the end, we have to develop an accurate sense of trust when it comes to our AI technology, the same way we gauge which human beings to trust with those decisions today. Times change, and although people can't imagine machine intelligence making life or death decisions today, a future iteration of the technology could set everyone at ease. Either way, when someone wants to take the human out of the loop, serious reflection is necessary.

Building Malicious AI


Most people seem to be worried that our AI technology will "go rogue" and turn on us. Frankly, that's a fantasy as things stand today. What's not a fantasy is a human creator deliberately using AI technology to do damage.

There are different ways you could approach this, but there's no doubt that AI can be directly weaponized.

First, think of AI systems that work as independent hackers. They roam the net, looking for weaknesses in security systems. Once they find one, they carry out their mission, whatever it may be. Do you think this idea is far-fetched? Then consider that AI software that can come up with a hypothesis, design an experimental test, perform it and then draw a conclusion had already been written back in 2009!

That’s not far off from the basic approach that hackers use. They observe, formulate potential hacks and then test them.
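As a harmless sketch of that hypothesize-test-conclude loop (everything below is invented for illustration), here the "system under study" is just a hidden numeric threshold, which the loop discovers by repeatedly proposing a value and testing it:

```python
# A benign automated hypothesize-test-conclude loop.
SECRET_THRESHOLD = 0.73  # the hidden fact the loop tries to discover

def run_experiment(stimulus: float) -> bool:
    """The system under study responds only above the hidden threshold."""
    return stimulus >= SECRET_THRESHOLD

low, high = 0.0, 1.0
for step in range(20):
    hypothesis = (low + high) / 2           # form a hypothesis
    responded = run_experiment(hypothesis)  # design and run the test
    if responded:                           # draw a conclusion, refine
        high = hypothesis
    else:
        low = hypothesis

print(f"concluded threshold is about {(low + high) / 2:.4f}")
```

Swap the toy threshold for a real system's behavior and the same loop becomes a probe that learns by experiment, which is exactly what makes the hacking scenario plausible.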

Of course, we already have lots of smart malware out in the wild. However, traditional malware doesn't learn new behaviors or change in unpredictable ways, at least not by design. Advanced AI malware might be a scourge only other AI can handle.

Mitigating the Dangers of AI

So what can we do to get some measure of control over the potential threats posed by AI technology? The answer largely depends on where AI is to be applied. After all, the greater the potential for harm, the higher the priority of intervention.

The most important mitigation would be to ensure that there's some form of robust and secure off-switch. As long as a human hand can still pull the plug, we have a good measure of safety and a way to act as the final ethical arbiter for AI use.
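Here's a minimal sketch of what such an off-switch can look like in software (a hypothetical worker loop, not any real system): the autonomous process checks a human-controlled flag on every cycle and halts as soon as it is set.

```python
# A human-controlled kill switch for an autonomous worker loop.
import threading
import time

kill_switch = threading.Event()  # the human-controlled "plug"

def autonomous_worker():
    while not kill_switch.is_set():  # check the switch on every cycle
        # ... do one unit of autonomous work here ...
        time.sleep(0.1)
    print("worker halted by human operator")

t = threading.Thread(target=autonomous_worker)
t.start()

time.sleep(1.0)      # the system runs autonomously...
kill_switch.set()    # ...until a human pulls the plug
t.join()
```

The hard part in practice is keeping that switch robust and secure, so the system can neither ignore it nor route around it.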

Perhaps the most interesting mitigation might be to essentially set a thief to catch a thief. In other words, it might be a good idea to use AI tools to independently monitor what other AI systems are doing. It's not such a far-fetched idea. If you think about it, your own body consists of multiple semi-independent components. They all talk to each other, but each organ does its own specialized job. Using systems of self-regulating AI may be one way to prevent unforeseen outcomes.
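As a toy sketch of that idea (the monitored "decision system" below is invented), a simple watchdog can track another system's outputs and raise an alert when they drift far outside the historical pattern:

```python
# A watchdog that monitors another system's outputs for anomalies.
import statistics

def decision_system(x):
    # A hypothetical system being monitored; it misbehaves for large inputs.
    return x * 0.5 if x < 50 else x * 50.0

history = []
for x in range(100):
    out = decision_system(x)
    if len(history) >= 10:
        mean = statistics.mean(history)
        spread = statistics.stdev(history)
        # Flag outputs far outside the pattern seen so far.
        if abs(out - mean) > 5 * spread:
            print(f"watchdog alert: anomalous output {out} at input {x}")
            break
    history.append(out)
```

A real monitor would itself be a learned model rather than a running average, but the principle is the same: one system independently checks another.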

Growing Up as a Species

If anything is clear, it's that there's no way to stick AI technology back into the box. Even today, techniques such as Deep Learning are paying off big time. We need AI to solve some of our toughest issues. From environmental modeling to automated robot labor, none of it is possible without AI. I expect that we'll be as dependent on AI in the future as we are today on medicine, fossil fuels and the other technological marvels that make daily life possible. However, when it comes to the ethical use of AI, our species still has some growing up to do.

What's your take on the ethics of AI? Worried? Excited? Let us know down below in the comments. Lastly, we’d like to ask you to share this article online. And don’t forget that you can follow TechNadu on Facebook and Twitter. Thanks!


