

Understanding responsible and ethical use of technology

Use this page to understand why you need a plan to manage ethical risks when you use technology.

Technology comes with risks and benefits

Everyone who creates digital services, provides activities or builds digital tools must pay attention to the ethical impact this has on others.

This means thinking about the impact on:

  • the community you serve
  • your stakeholders and your board
  • your wider organisation.

This is not something that only affects digitally advanced organisations. If you connect with the people you support online in any way, it affects you.

All these services (and more) can be run in ethical or unethical ways:

  • Collecting emails and sending newsletters
  • Managing memberships and donor data
  • Creating online grant processes
  • Running chatbots
  • Carrying out online consultations
  • Taking online donations and Gift Aid
  • Running online shops
  • Running an online coffee morning

We don’t set out to do harm when we’re trying to help people. None of us want to do the following.

  • Breach someone’s privacy or confidentiality by sharing personal information.
  • Give data to a third party without the permission of the person in question.
  • Harm the communities we serve through the technologies we use to serve them.

Technology doesn’t police itself. Many of the digital services and technologies available for commercial use aren’t inherently good or bad. But how we use them can have a significant impact in creating positive or negative outcomes.

Your risks and responsibilities

Use these questions to help you think about your ethical responsibilities:

  • What are you trying to achieve?
  • How can this technology help or harm people?
  • What does a good service look like for your community?

There are eight areas of risk. This page focuses on four that are most relevant to charities and other organisations.

  • Do people understand how you're using their data? (Implicit trust and user understanding)
  • Do you make it easy enough for people to know what data you have? (Data control and monetisation)
  • Do you use any automated systems that could contain or increase unfairness? (Machine ethics and algorithmic bias)
  • Are you expected to share what you know about people with the government? (The surveillance state)
[Image: a circular diagram of the 8 risk zones in Omidyar Network’s Ethical OS model, including Truth, Disinformation and Propaganda; Addiction and the Dopamine Economy; Economic and Asset Inequalities; and Machine Ethics and Algorithmic Bias.]

Implicit trust and user understanding

Misuse of data is a serious problem. Do you always understand what you’re signing up for?

Very often the people you support won’t have the skills to work out what permission they’re giving on the websites and apps they use. They need your help to understand what you’re doing with their information.

  • Do you track what they do on your website?
  • Do you use that information to analyse how well your website is doing?
  • Does it change what they see?

The website is just one example. You also need to think about any databases or mailing lists you have. Are these covered by your data protection and privacy policy?

The main ethical message is that asking people to give permission is not enough. You need to help people understand what they’re giving permission for.
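One way to act on this can be sketched in code. The example below is a hypothetical illustration with invented names, not a real system: it treats permission as something you check on every action, so no tracking happens for anyone who hasn’t actively opted in.

```python
# Hypothetical sketch: only record analytics events for people who have
# explicitly opted in. All names and data here are invented for illustration.

analytics_log = []
consents = {"alice@example.org": True, "bob@example.org": False}

def record_page_view(email: str, page: str) -> bool:
    """Log a page view only if this person has given permission."""
    if consents.get(email, False):  # anyone not on the list defaults to "no"
        analytics_log.append({"who": email, "page": page})
        return True
    return False

record_page_view("alice@example.org", "/donate")  # consented, so logged
record_page_view("bob@example.org", "/donate")    # no consent, so ignored
```

The design choice that matters is the default: someone you have no record of is treated as not having consented, rather than the other way round.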

Data control

Data protection law gives everyone the right to ask to see what you’re doing with any information you hold about them.

Over the next few years, people will expect to see this more clearly and to have much more control over this information, at the push of a button. They’ll become increasingly keen to protect data about themselves and to choose how it’s used.

You need to think about how to make it easier to tell people what data you’re using and in what ways. You need to make it easier for them to give you their information safely. You also need to make it easier for them to take that permission away again.
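A minimal sketch of those two obligations, using invented names and a simple in-memory store: letting someone see everything you hold about them, and letting them take their permission away again.

```python
# Hypothetical sketch of the two requests people will increasingly expect:
# "show me what you hold about me" and "stop using it". Invented data.

records = {
    "carol@example.org": {"name": "Carol", "donations": [20, 35], "newsletter": True},
}

def subject_access(email: str) -> dict:
    """Return a copy of everything held about this person."""
    return dict(records.get(email, {}))

def withdraw_permission(email: str) -> None:
    """Remove the person's data when they take their permission away."""
    records.pop(email, None)

print(subject_access("carol@example.org"))  # everything held about Carol
withdraw_permission("carol@example.org")
print(subject_access("carol@example.org"))  # now empty
```

In a real system both operations are more involved (backups, legal retention requirements, audit trails), but the principle is the same: both should be as easy for the person as giving you the data was in the first place.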

Machine ethics and algorithmic biases

Machine learning and algorithms are elements of AI (artificial intelligence). Charities, social enterprises and public sector organisations are exploring how they can use this type of AI for things like:

  • helping people find the right information
  • offering counselling or self-help activities
  • getting donations through voice-activated software like Alexa.

These types of AI can pick up bias from the materials (datasets) that are used to train them. They can make the bias even stronger.

If you’re starting to work in this area you need to ask probing questions. You need to understand where the materials and datasets that train the systems come from. Ask any partner organisations you’re working with on this for their ethical policies and practices.
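One probing question can be made concrete with a simple check, sketched below with invented data: compare how often an automated tool says "yes" for each group of people it sees. A large gap between groups doesn’t prove bias, but it tells you where to start asking questions.

```python
# Hypothetical sketch: compare approval rates across groups for an imagined
# grant-triage tool. The data and group labels are invented for illustration.
from collections import defaultdict

decisions = [  # (group, approved) pairs
    ("urban", True), ("urban", True), ("urban", True), ("urban", False),
    ("rural", True), ("rural", False), ("rural", False), ("rural", False),
]

def approval_rates(pairs):
    """Return the share of 'yes' decisions for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in pairs:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# urban: 0.75, rural: 0.25 – a gap this large is a warning sign to investigate
print(rates)
```

This kind of check is something you can reasonably ask a partner organisation to show you for their own system, alongside their ethical policies.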

Surveillance state

The huge connectivity of the digital world brings a risk. Many communications applications track emails, text messages or call content. GPS tracking can monitor where people are through their phones. When you store sensitive personal information about people who need help or support, that information exists as a record.

Depending on the law in each country, police and governments can demand access to these records. This has sometimes led to arrests and even torture.

This page was last reviewed for accuracy on 02 March 2021
