Introduction

Audio Forensic Analysis Procedures for User Generated Audio Recordings

This webinar originally occurred on November 29, 2022
Duration: 1 hour

Overview

The widespread use of handheld smartphones and other devices capable of recording audio and video means that user generated recordings (UGRs) are increasingly presented as evidence in criminal investigations. There is a need to determine how best to combine the information available from multiple UGRs. The presenter’s research was conducted to increase the audio forensic knowledge base by developing new and innovative techniques to synchronize and process multiple concurrent ad hoc audio recordings from a crime scene obtained from body cameras, cell phone videos, surveillance cameras, dashboard cameras, and other recording devices.

When two or more audio devices are operating concurrently from different spatial locations while recording the same sound source, the recordings will not be identical, but we would expect a good correspondence, or correlation, among them. The sound received at each microphone will differ due to the directionality of the source and microphones, the different distances between the source and each microphone, and the presence of sound reflections, background noise, and reverberation. The recordings will have unsynchronized start and stop times, different durations, and typically imprecise or unknown spatial location information. Therefore, the examiner must consider how to combine the multiple audio recordings in a manner that leads to a useful and meaningful investigative conclusion.
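
As a minimal illustration of this correspondence, and not the presenter's specific method, the following Python sketch estimates the time offset between two recordings of the same event by locating the peak of their cross-correlation. The file names and common sample rate are hypothetical; in practice the recordings would first be converted to mono and resampled to a shared rate.

  import numpy as np
  from scipy.io import wavfile
  from scipy.signal import correlate, correlation_lags

  # Hypothetical evidence files; both assumed mono and at the same sample rate.
  fs_a, rec_a = wavfile.read("device_a.wav")
  fs_b, rec_b = wavfile.read("device_b.wav")
  assert fs_a == fs_b, "resample to a common rate before comparing"

  # Normalize to floating point so integer sample formats compare fairly.
  a = rec_a.astype(float) / np.max(np.abs(rec_a))
  b = rec_b.astype(float) / np.max(np.abs(rec_b))

  # The lag at the cross-correlation peak is the estimated offset between
  # the two devices' timelines; a positive lag means the shared sound
  # appears later in recording A.
  xc = correlate(a, b, mode="full")
  lags = correlation_lags(a.size, b.size, mode="full")
  offset_seconds = lags[np.argmax(xc)] / fs_a
  print(f"Estimated offset: {offset_seconds:+.3f} s")

A single global peak is meaningful only when the recordings genuinely overlap and the shared sounds dominate the noise; in practice the offset would typically be checked over several shorter segments.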

User generated audio recordings often come from mobile devices (e.g., smartphones, tablets, handheld recorders) that can generally edit or otherwise alter the recorded information internally, without the data ever leaving the device. An audio forensic examiner needs to assess whether the integrity of the recording could be compromised, either deliberately or inadvertently, during the investigation. It is essential to develop and implement a standard protocol for receiving, tagging, and processing the audio evidence while maintaining a documented chain of custody. Moreover, methods to identify possible insertions, deletions, or other alterations need to be applied.
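
One common building block for such a protocol, offered here only as a generic illustration rather than as the presenter's procedure, is to compute a cryptographic hash of each file at intake so that any later working copy can be verified bit-for-bit. The file name and log format below are hypothetical.

  import hashlib
  from datetime import datetime, timezone
  from pathlib import Path

  def log_evidence_intake(path: str) -> str:
      """Record a SHA-256 digest and UTC timestamp for an evidence file.

      Matching digests later show the bitstream is unchanged since intake;
      they do not by themselves establish that the recording was authentic
      when it was received.
      """
      digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
      received = datetime.now(timezone.utc).isoformat()
      print(f"{received}  {path}  sha256={digest}")
      return digest

  # Hypothetical intake of a submitted cell phone video
  log_evidence_intake("exhibit_03_cellphone_video.mp4")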

As noted above, in many cases the UGRs, as well as recordings from law enforcement, will contain noise. Wind gusts, traffic, crowd noise, footsteps, and other unintended sounds are common in most recordings, and these interfering sounds may be louder than the sound sources of interest for synchronization purposes. There are numerous algorithms available for noise reduction, de-clipping, and filtering, but it is important to understand the ways in which noise reduction can alter the temporal characteristics of the signals, which in turn will alter the reliability of time synchronization. Furthermore, important audio forensic observations are often derived from the low-level background sounds in a recording, including tell-tale mechanical sounds or possible evidence of unexplained deletions or insertions.
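
As a small, generic demonstration of this timing concern, and not one of the presenter's algorithms, the sketch below filters a noisy test transient in two ways: with a causal filter, whose group delay shifts the apparent onset, and with a zero-phase forward-backward filter, which does not. The sample rate, filter settings, and test signal are arbitrary choices for illustration.

  import numpy as np
  from scipy.signal import butter, lfilter, filtfilt, correlate, correlation_lags

  fs = 8000                                  # assumed sample rate (Hz)
  rng = np.random.default_rng(0)

  # A short decaying transient (standing in for an impulsive event of
  # interest) buried in broadband noise.
  clean = np.zeros(fs)
  clean[fs // 2 : fs // 2 + 200] = np.exp(-np.arange(200) / 40.0)
  noisy = clean + 0.05 * rng.standard_normal(clean.size)

  # A deliberately aggressive low-pass "noise reduction" filter.
  b, a = butter(4, 1000 / (fs / 2), btype="low")
  causal = lfilter(b, a, noisy)        # causal: introduces group delay
  zero_phase = filtfilt(b, a, noisy)   # forward-backward: no net delay

  def peak_lag(reference, candidate):
      xc = correlate(candidate, reference, mode="full")
      lags = correlation_lags(candidate.size, reference.size, mode="full")
      return lags[np.argmax(xc)]

  print("apparent shift, causal filter:    ", peak_lag(clean, causal), "samples")
  print("apparent shift, zero-phase filter:", peak_lag(clean, zero_phase), "samples")

Even a shift of a few samples matters once lags are converted into distances, which is one reason any processing applied before synchronization should be documented.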

The availability of UGRs offers many important audio forensic insights. The proposed examination methodology includes time synchronization, noise reduction, and spatial position estimation. The proposed methods for forensic handling of UGRs also entail establishing best practices for assessing the authenticity and integrity of the recorded information. The goal of the presenter's research is to understand the limitations of forensic interpretation of evidence obtained from UGRs, especially in terms of audio bandwidth, recording quality, and questions of authenticity.
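
To make the spatial-position idea concrete, the following minimal sketch, which assumes the recordings have already been synchronized and that the device clocks run at their nominal rates, converts an arrival-time difference between two devices into a path-length difference. That difference constrains the source location to a hyperbola; estimates from several device pairs are needed for an actual position fix. The numbers are illustrative only.

  SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

  def path_length_difference(lag_samples: int, fs: float) -> float:
      """Convert a time difference of arrival, measured in samples, into
      the difference in source-to-microphone distance in meters."""
      return (lag_samples / fs) * SPEED_OF_SOUND

  # Example: a 120-sample lag at 48 kHz is a 2.5 ms delay, meaning the
  # source was about 0.86 m closer to one device than to the other.
  print(f"{path_length_difference(120, 48_000):.2f} m")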

The research presented within this webinar was funded by the National Institute of Justice (Award Number: 2019-DU-BX-0019). 

Detailed Learning Objectives

  1. Attendees will learn the principal advantages and challenges of user generated audio recordings.
  2. Attendees will understand basic concepts about audio authentication.
  3. Attendees will learn several common applications of user generated recordings in forensic audio analysis.

Presenter

  • Rob Maher, Ph.D. | Professor of Electrical & Computer Engineering, Montana State University

Funding for this Forensic Technology Center of Excellence webinar has been provided by the National Institute of Justice, Office of Justice Programs, U.S. Department of Justice.

The opinions, findings, and conclusions or recommendations expressed in this webinar are those of the presenter(s) and do not necessarily reflect those of the U.S. Department of Justice.

Contact us at ForensicCOE@rti.org with any questions and subscribe to our newsletter for notifications.

