Humanitarian Response – Digital Infrastructure, Privacy, Risk
The humanitarian metadata problem: ‘Doing no harm’ in the digital era
ICRC, Privacy International
October 2018 :: 130 pages
About this study
New technologies continue to present great risks and opportunities for humanitarian action. To ensure that their use does not result in any harm, humanitarian organisations must develop and implement appropriate data protection standards, including robust risk assessments.
However, this requires a good understanding of what these technologies are, what risks are associated with their use, and how we can try to avoid or mitigate them. The following study tries to answer these questions in an accessible manner. The aim is to provide people who work in the humanitarian sphere with the knowledge they need to understand the risks involved in the use of certain new technologies. This paper also discusses the “do no harm” principle and how it applies in a digital environment.
This study was commissioned by the International Committee of the Red Cross (ICRC) from Privacy International (PI). The study does not advocate for privacy or against surveillance. Rather, it maps out where surveillance may obstruct or threaten the neutral, impartial and independent nature of humanitarian action…
The past decade has seen a surge in the use of mobile telecommunications, messaging apps and social media. As they become more accessible around the world, these technologies are also being used by the humanitarian sector to coordinate responses, communicate with staff and volunteers, and engage with the people they serve.
These exchanges lead to an increase in metadata: data about other data. In their most common form, metadata are the data that are generated around a message, but not the content of the message. Imagine that you are a clerk at the post office: content data would be information contained inside each parcel that comes your way. These content data are often protected by law and other technical safeguards. However, metadata – data that are found on the outside of the parcel or that can be inferred from the parcel’s appearance – are often less well protected. They can be accessed and read by third parties as they pass through the postal system.
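The parcel analogy can be made concrete. The sketch below (field names and values are hypothetical, not drawn from the report) models a message as content plus metadata, and shows that a third party who strips away the content is still left with an informative envelope:

```python
# A message split into content (the "inside of the parcel") and
# metadata (the "outside": sender, recipient, time, size, location).
message = {
    "content": "Meet me at the clinic at 9am.",  # often protected in transit
    "metadata": {
        "sender": "+41790000001",
        "recipient": "+41790000002",
        "timestamp": "2018-10-03T08:12:44Z",
        "size_bytes": 29,
        "cell_tower": "GVA-0042",
    },
}

# A third party with no access to the content still learns who spoke
# to whom, when, from roughly where, and how much was said.
envelope = {k: v for k, v in message.items() if k != "content"}
print(envelope["metadata"]["sender"], "->", envelope["metadata"]["recipient"])
```

Even this tiny envelope links two phone numbers, a time of day and an approximate location, which is the kind of trail the report is concerned with.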
What are metadata?
Today there are many forms of such data. In this report, we differentiate between declared data, inferred data, and interest or intent data. These data can be owned, processed, shared and stored for different periods of time, by different third parties, and under different jurisdictions applying different regulations.
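The three categories can be illustrated with a minimal sketch (the examples are invented for illustration, not taken from the report): declared data are supplied by the person, inferred data are derived from their metadata, and interest or intent data are predictions built on top of both:

```python
# Declared: information the person knowingly provides.
declared = {"phone_number": "+41790000001"}

# Inferred: derived from metadata the person never stated explicitly,
# e.g. a likely home area from the cell tower most often used at night.
night_towers = ["GVA-0042", "GVA-0042", "GVA-0017", "GVA-0042"]
inferred = {"likely_home_area": max(set(night_towers), key=night_towers.count)}

# Interest/intent: predictions about what the person wants or may do,
# e.g. built from browsing or transaction metadata for ad targeting.
intent = {"predicted_interest": "money transfer services"}
```

The point of the taxonomy is that only the first category is ever consciously handed over; the other two are manufactured from metadata, often by parties the person has no relationship with.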
This complex landscape requires that humanitarian organisations learn how to more systematically assess, understand, and mitigate the risks involved in programme activities that generate metadata.
Why should the humanitarian sector care about metadata?
Humanitarian organisations collect and generate growing amounts of metadata. They do this through their exchanges internally and with people affected by crises (e.g. sharing “info-as-aid” over messaging apps and/or via SMS and social media); their programmes (e.g. cash-transfer programmes that use mobile cash or smartcards); and their monitoring and evaluation systems (e.g. using data analytics on programme data to detect fraud).
To reconcile these actions with the “do no harm” principle, the humanitarian community must better understand the risks associated with the generation, exposure and processing of metadata. This is particularly important for organisations that enjoy certain privileges and immunities but that are not able to counter these risks alone.
Processing data and metadata
Specifically, humanitarian organisations need to better understand how data and metadata collected or generated by their programmes, for humanitarian purposes, can be accessed and used by other parties for non-humanitarian purposes (e.g. by profiling individuals and using these profiles for ad targeting, commercial exploitation, surveillance, and/or repression).
For instance, information about an individual registered for a cash-transfer programme can be accessed and used by the financial institution implementing the programme. The institution can then use this information to categorise the individual as an untrustworthy borrower, thereby limiting their access to financial services. If the institution has information-sharing agreements with other institutions that are part of the same financial group, this sort of profiling can prevent the individual from accessing those institutions’ services as well.
Understanding the legal and policy landscape
To fully appreciate such situations, humanitarian organisations should map out who exactly has access to the data and metadata they generate and for how long. These factors are affected by the technical, legal and policy landscapes, which vary greatly despite efforts to streamline regulations (through initiatives like the EU’s General Data Protection Regulation, for example).
These landscapes are also changing as expanded access to data is sought by both public entities (e.g. to combat crime or follow migration flows) and private ones (e.g. to monetise user data or improve their business models). Moreover, some service providers may have an obligation to disclose data or metadata. For instance, a number of banks are obliged to flag “suspicious activity” on their clients’ accounts or collect information about clients under Know Your Customer regulations designed to prevent money laundering and other criminal activity.
Where services intersect
The following section summarises the risks associated with the use of traditional telecommunication services (including voice and SMS), messaging applications, cash-transfer programming and social media. While each type of service is discussed separately, they may overlap where financial companies are also telecommunication companies or where social media providers also own messaging applications. This has implications for the amount of data and metadata any given entity has access to or can generate and for the variety of jurisdictions under which these data are generated and stored…
Digital trails could endanger people receiving humanitarian aid, ICRC and Privacy International find
Geneva (ICRC) – 07-12-2018 The humanitarian sector’s growing use of digital and mobile technologies creates records that can be accessed and misused by third parties, potentially putting people receiving humanitarian aid at risk, a joint report from Privacy International and the International Committee of the Red Cross (ICRC) has found.
The report – The humanitarian metadata problem: ‘Doing no harm’ in the digital era – explains how third parties could, for example, look at the metadata of someone’s mobile telephone messages to infer details like sleep patterns, travel routines or frequent contacts. That kind of information could pose risks to a person in a conflict environment.
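How timestamps alone can reveal sleep patterns and frequent contacts can be sketched in a few lines. The records below are invented for illustration; the sketch sees only when messages were sent and to whom, never their content:

```python
from collections import Counter
from datetime import datetime

# Hypothetical message metadata: (timestamp, contact ID) pairs only.
events = [
    ("2018-12-01T07:05:00", "A"), ("2018-12-01T12:30:00", "B"),
    ("2018-12-01T22:40:00", "A"), ("2018-12-02T07:10:00", "A"),
    ("2018-12-02T13:00:00", "C"), ("2018-12-02T22:55:00", "A"),
]

# Hours of day with no activity across the sample suggest a sleep window.
active_hours = {datetime.fromisoformat(ts).hour for ts, _ in events}
quiet_hours = sorted(set(range(24)) - active_hours)

# The most frequently messaged contact falls out of a simple count.
most_contacted = Counter(contact for _, contact in events).most_common(1)[0][0]

print("quiet hours:", quiet_hours)
print("most contacted:", most_contacted)
```

With a realistic volume of traffic, the same counting yields daily routines, travel patterns and a social graph, which is why the report treats metadata exposure as a protection issue rather than a technicality.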
“The ICRC hopes the report influences other humanitarian organizations to better protect their data,” said Charlotte-Lindsey Curtet, the organization’s newly appointed Director of Digital Transformation and Data. “Collaborating more closely with experts like Privacy International can help us to better mitigate these kinds of risks, in order to do no harm in a changing digital environment.”
The report details what metadata is collected or generated when humanitarian organizations use telecommunications, messaging apps or social media in their work. While the report doesn’t advocate for privacy or against surveillance, it demonstrates how ensuing surveillance risks could obstruct or threaten the neutral, impartial and independent nature of humanitarian action.
To remedy this, the report recommends a more systematic mapping of who has access to what information in order to anticipate how individuals might be profiled or discriminated against. It also encourages humanitarian organisations to improve digital literacy among their staff, volunteers – and most importantly, the people they serve.
“Technology is crucial if we want to engage with and better serve the needs of people we can’t physically access,” said Philippe Stoll, Head of Communication Policy and Support. “But using these platforms means creating an information trail we neither own nor control, and that’s something we must get better at anticipating.”
The report’s findings and recommendations will form part of the discussions at the ICRC’s Symposium on Digital Risks in Situations of Armed Conflict, taking place 11-12 December in London. Nearly 200 participants from humanitarian organisations, United Nations agencies, private tech companies, academia and government will attend.