ICL-Project15-ELP


Ethical consequences

Responsible AI - Microsoft Values

Microsoft Definitions

In developing our AI/machine learning system, we took into account the ethical values that Microsoft teaches its employees, clients and other companies. The six pillars Microsoft teaches for building AI systems responsibly are:

- Fairness
- Reliability & Safety
- Privacy & Security
- Inclusiveness
- Transparency
- Accountability

Incorporation of these ethical AI values into our project

Fairness

Making sure that the system we developed and deployed reduces unfairness in society, rather than keeping it at the same level or making it worse. We tackled this pillar by talking to as many people from different industries as possible while developing our system: computer vision experts, cloud advocates, university professors and, very importantly, our client, who works in the field.

Reliability & Safety

In order to develop a system that is reliable, we asked our client for a varied and diverse set of test data on which to test our system. This meant we could provide our client with statistics on how the system behaves in different situations, as well as conditions they should look out for in order for the system to perform as it should. A system that does not perform correctly can cause harm to a project or business: our client plans to send out helicopters to capture poachers if they deploy our system in real time, so it is important that they know the reliability and confidence levels of the system. Sending helicopters is expensive, and the costs can pile up.
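As an illustration of the kind of reliability statistics described above, here is a minimal sketch of how precision and recall could be reported at several confidence thresholds on a labelled test set. The labels and scores below are made-up placeholders; we assume only that the detector outputs a confidence score per audio clip, which is not necessarily the exact interface of our system.

```python
# Minimal sketch: summarising detector reliability on a labelled test set.
# y_true and y_score are made-up placeholders; any detector that emits a
# confidence score per audio clip could be summarised this way.
from sklearn.metrics import precision_score, recall_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = clip contains a gunshot
y_score = [0.91, 0.12, 0.78, 0.55, 0.40, 0.05, 0.97, 0.63]

for threshold in (0.5, 0.7, 0.9):
    y_pred = [int(s >= threshold) for s in y_score]
    precision = precision_score(y_true, y_pred, zero_division=0)
    recall = recall_score(y_true, y_pred, zero_division=0)
    print(f"threshold={threshold:.1f}  precision={precision:.2f}  recall={recall:.2f}")
```

Reporting precision at several thresholds matters here because every false alarm could trigger an expensive helicopter deployment, while every missed gunshot is a poacher who goes undetected.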

Privacy & Security

We made sure to ask our client what we were and were not allowed to make publicly available, such as the data we were using, images of the locations of the acoustic recorders currently deployed, meeting recordings and private discussions we had about the project. This meant that we could build our system in a secure way that does not endanger anyone on the project or the forest elephants we are trying to protect.

Inclusiveness

Empowering and engaging the communities that want to use our system is important, and therefore we have written thorough documentation on how to run it. We have also provided three different models, so that projects with different hardware constraints or conditions can choose the model that suits them, with the trade-offs of each one listed.
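As a rough sketch of what this choice could look like in practice (the model names and memory requirements below are hypothetical, not the actual values from our repository):

```python
# Hypothetical sketch of picking one of three provided models based on
# the hardware available to a project. Names and numbers are illustrative.
MODELS = {
    "small":  {"min_ram_gb": 2,  "notes": "fastest, lowest accuracy"},
    "medium": {"min_ram_gb": 8,  "notes": "balanced speed and accuracy"},
    "large":  {"min_ram_gb": 16, "notes": "most accurate, slowest"},
}

def choose_model(available_ram_gb: float) -> str:
    """Return the largest model that fits within the available RAM."""
    fitting = [name for name, spec in MODELS.items()
               if spec["min_ram_gb"] <= available_ram_gb]
    # Fall back to the smallest model if nothing fits comfortably.
    return max(fitting, key=lambda n: MODELS[n]["min_ram_gb"]) if fitting else "small"

print(choose_model(8))  # -> "medium"
```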

This inclusiveness also means that our system must be usable by people who are not fluent in programming or technology, so we designed it to be easy to run. A main aim of Project 15 is to bring science and technology together to solve world problems.

Transparency

Transparency helps to mitigate unfairness and allows an AI system to evolve. You have to be open about the limitations of the system and understand its behaviour; therefore all our code is open source and can be built upon in the future. We also provide our client with the complete dataset on which our AI system was trained.

Accountability

Accountability is about the structures we put in place to uphold these principles. In our project, we are accountable for the system running correctly and for building it in a correct and robust way so that it can be deployed in the field. We are also accountable for keeping all of these ethical and responsible AI values in mind when developing our system.

Consequences of gunshot detection

Poaching elephants is illegal. The ban on the international ivory trade was introduced in 1989 by CITES (the Convention on International Trade in Endangered Species of Wild Fauna and Flora) after years of unprecedented poaching.

Rangers

A key consequence of our system is that, once it is deployed in the field, national park rangers will be notified when there has been a gunshot and which acoustic recorders/microphones picked it up. Rangers then put their lives at risk to catch poachers who are armed with lethal weapons. From an ethical perspective we have to think about the safety of the rangers: any rangers or helicopters sent in to catch poachers based on information from our system must be informed and fully trained to do so, so that no one's life is put at unnecessary risk.
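To make that notification flow concrete, a detection alert could carry a payload along these lines; the field names are illustrative assumptions, not the exact schema of our system:

```python
# Illustrative sketch of the data a gunshot alert could carry so that
# rangers know which recorders detected the event and how confident
# the model was. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GunshotAlert:
    recorder_ids: list[str]   # acoustic recorders that picked up the shot
    confidence: float         # model confidence for the detection
    detected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

alert = GunshotAlert(recorder_ids=["rec-07", "rec-12"], confidence=0.94)
print(alert)
```

Including the model's confidence in the alert lets the team weigh the risk and cost of sending rangers or a helicopter against the likelihood that the detection is real.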

Poachers

Since poaching is illegal, poachers who are caught will be detained and put on trial. Our system may be used as the means by which a poacher is caught, and they may end up going to jail; how ethical this is depends on local law and one's own views.

Sensors

If our system is deployed on IoT devices, we have to make sure to use ethical sensors that do not disrupt the natural rainforest environment or birds' nests. Some devices may get hot under increased load, so it is important to make sure this does not harm animals or cause a fire. Likewise, a device found by an animal must not cause it harm, for example if the animal consumed parts of the device.

Users of the system

The users of the system are responsible for the data collection and the reporting of gunshots. This could be used in an unethical way, for example:

- Tampering with or falsifying the detection data
- Sharing detection data or recorder locations with poachers

It is up to the user to use the system ethically, since unethical use can make the problem of poaching even worse: falsified data may mean that rangers are sent to patrol the wrong areas, and data shared with poachers may let them avoid an area so they are not caught.