How to Tell Which Crisis Images Are Real – And Which Are Fake

During Hurricane Sandy, NPR posted this image showing soldiers at Arlington National Cemetery guarding the Tomb of the Unknown Soldier. Though the outlet reported it was taken during the storm, it was actually taken several weeks before (Photo Credit: NPR).

A few days ago, the Afghan government published an investigation into an airstrike by international forces on January 15, 2014, that reportedly killed several Afghan civilians. The investigation relied heavily on photographs and a video showing the aftermath of the strike.

In the context of the Houla massacre in Syria in May 2012, the BBC published a distressing image showing a child jumping over a row of dead bodies.

During Hurricane Sandy in October 2012, NPR posted an image showing soldiers at Arlington National Cemetery weathering the storm and guarding the Tomb of the Unknown Soldier.

What do these examples have in common? They all involve authentic images – meaning the events they depict actually took place – that were presented inaccurately: As the New York Times reported this week, the Afghan investigation included two images that were at least three years old.

BBC used an image that was widely circulated on social media in the context of the Houla massacre in Syria. However, the image was taken 10 years earlier in Iraq (Photo Credit: BBC).

The BBC, in a major blunder, used an image widely circulated on social media that turned out to have been taken almost ten years earlier, in Iraq in 2003.

And the soldiers at Arlington really weathered the rain – only that specific picture showed them several weeks before Sandy hit.

Technically manipulated or staged pictures or videos often receive major attention when debunked. However, it is the authentic material accompanied by erroneous information – mistaken or intentional – that poses the main challenge for journalists and human rights workers. With the wide distribution of free and easily accessible verification tools, these mistakes become increasingly inexcusable.

An Urgently Needed Verification Toolbox

All of the aforementioned mistakes could have been easily discovered within a few minutes by doing a reverse image search online, using free tools such as TinEye or Google Images, which allow you to search by an image URL or by uploading an image.
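To give a sense of how little effort this takes, here is a minimal sketch in Python that opens a reverse image search for a given image URL in both TinEye and Google Images. The query-string formats and the example image URL are my own assumptions based on each service's public search page, not official APIs, and they may change over time.

# A minimal sketch, assuming Python 3: open a reverse image search for an
# image URL in the default web browser. The TinEye and Google query-string
# formats below mirror the services' public search pages, not official APIs,
# and may change without notice.
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url):
    """Open TinEye and Google Images reverse searches for the given image URL."""
    encoded = quote(image_url, safe="")
    search_pages = [
        # TinEye accepts an image URL as a "url" query parameter.
        "https://tineye.com/search?url=" + encoded,
        # Google Images' unofficial "search by image" endpoint.
        "https://www.google.com/searchbyimage?image_url=" + encoded,
    ]
    for page in search_pages:
        webbrowser.open_new_tab(page)

# Hypothetical example; replace with the URL of the image you want to verify.
reverse_image_search("https://example.com/crisis-photo.jpg")

If the image has no public URL (for example, when a file was sent to you directly), both services also let you upload the file through their regular search pages, which takes only a moment longer.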

This is only one technique described in a new, important (and free) resource for the verification of user-generated content, Verification Handbook: A Definitive Guide to Verifying Digital Content For Emergency Coverage, published today (for full disclosure, I am one of the contributors to the book).

Although its main audience is journalists, humanitarian responders and human rights researchers who work in crisis situations, it is a resource that will ultimately benefit a much larger audience. The need for such a resource is without question. Too often, images and videos distributed through social media come with erroneous context, an issue that becomes most problematic during emergencies, as Craig Silverman and Rina Tsubaki, the driving forces behind the new book, describe in the introduction:

(…) the work of verification is perhaps most difficult in the very situations when providing accurate information is of utmost importance. In a disaster, whether its cause is natural or human, the risks of inaccuracy are amplified. It can literally be a matter of life and death.

To be clear, while many of the book’s contributors come from journalism, verification challenges are not a problem limited to journalists. Approximately a year ago, I posed the question of whether video could document possible war crimes.

Today, more than ever, I believe the answer is yes. However, to take advantage of the new trove of potential evidence, such as citizen images or video, in human rights advocacy reports, let alone in legal proceedings, we need increased verification capacity and training for human rights investigators. The new opportunities and pitfalls are widely recognized both by human rights watchdogs, such as Amnesty International, and by tribunals, such as the International Criminal Court:

The sudden explosion of information from previously unknown sources has posed many challenges in verification for international media and human rights organizations. Yet, without citizen journalists reporting from their neighborhoods, often at great risk to their own safety, news of many of the abuses, including crimes against humanity and war crimes, might never have reached the outside world – Amnesty International: Shooting the Messenger: Journalists Targeted by All Sides in Syria. May 2013.

Such evidence could include photographs, videos, or messages posted on Facebook, YouTube, Twitter or other social messaging sites. While this increase in footage of potential crimes offers the Court an incredible opportunity to gather evidence, it also poses significant technological problems. (…) the Court needs to consider how it will deal with the increasing volume of information and potential evidence captured on cell phones and other mobile devices, and how to verify the authenticity and integrity of photographs and videos uploaded to the Internet — Beyond Reasonable Doubt: Using Scientific Evidence to Advance Prosecutions at the International Criminal Court. October 2012.

A Comprehensive Reference

The new book gives an overview of the state of the art in digital verification techniques and tools, and is an indispensable reference for everyone attempting to find the truth in the digital haystack. I have already added it as required reading for participants in our Citizen Media Evidence Partnership, in which we train college students in video validation.

Especially noteworthy is the fact that the book doesn’t stop at verification tools, which will surely require updating soon as new tools emerge. The book concludes with two extremely important issues to consider when dealing with user-generated content that depicts human rights events and potential violations. First, Madeleine Bair of WITNESS addresses the question of how to assess and minimize risks when working with user-generated content. The principle that stands above all others is to first do no harm: do not expose human rights defenders in widely distributed videos or photos, and do not re-victimize people whose abuse is shown in a video.

One example she lists is “Iran’s Green Revolution of 2009, when the Islamic Revolutionary Guard used photos and video stills they found online to target protesters and crowd source their identification, actions that sent a chill through the activist community.”

Second, Gavin Rees of the Dart Centre Europe flags the risk of experiencing secondary trauma when working with highly graphic content coming out of conflict zones or other human rights hot spots. It is a real risk, and I highly appreciate the editor’s effort to address this issue. After all, all of us working in this field want to have a positive impact. This is only possible when we take good care of ourselves first, something that can be too easily brushed aside when disaster strikes.

Verification Handbook: A Definitive Guide to Verifying Digital Content for Emergency Coverage. Edited by Craig Silverman. Available January 28, 2014.
