Bot or Not? Authenticating Social Media Evidence at Trial in the Age of Internet Fakery
Prosecutors offer Facebook posts to show that a gang leader “green lighted” the hatchet killing of a homeless man for “snitching” on him.1 A plaintiff in an Internet stalking case offers the hundreds of abusive emails she received from anonymous senders after spurning the defendant’s advances.2 The government secures a conviction for illegal firearm possession by offering Facebook photos of the defendant with a .45 caliber pistol—but no physical evidence.3
These cases illustrate how social media evidence has become an important feature of modern trial practice, just as it affects how we shop, work, eat, vote, watch TV, and interact with one another. We can summon and use social media virtually instantly with smart phones—devices the Supreme Court recently called “almost a feature of human anatomy.”4 Given social media’s pervasiveness in our culture, and the frequency with which people use it compared to other forms of communication, social media evidence is a broader and deeper trove of courtroom evidence than has ever been available before. At the same time, however, social media evidence is uniquely vulnerable to alteration or forgery, particularly as advances in technology allow so-called “bot” accounts to create social media content autonomously.5
A new frontier brings new challenges
Offering instant messages, tweets, and social media posts of all types at trial is now commonplace. Such evidence can be useful, for example, to prove a party’s mental state or to prove that someone was in a given place at a given time—like on a ski slope days after an alleged injury.6 Even before trial, social media may provide strategic value; for instance, a plaintiff’s statements on product-review forums that contradict the allegations in a consumer class action complaint could help a defendant secure pretrial dismissal.
But while social media has improved our ability to tell the jury “what really happened,” it also creates new challenges for how that story can be told. The jury cannot see evidence unless it is authenticated and admitted. Federal Rule of Evidence 901(a) (and numerous state analogs) requires the proponent of evidence to “produce evidence sufficient to support a finding that the item is what the proponent claims it is.” This standard imposes a relatively low bar, requiring “[o]nly a prima facie showing of genuineness . . . ; the task of deciding the evidence’s true authenticity and probative value is left to the jury.”7 Compared to a voicemail, a letter, or even an email, however, authenticating social media evidence can be challenging due to “the ease with which a social media account may be falsified or a legitimate account may be accessed by an imposter.”8 Thus, lawyers must lay a foundation that addresses the “concern that someone other than the alleged author may have accessed the account and posted the message in question.”9
Courts sometimes disagree on what must be shown to satisfy this concern. Some impose a relatively high bar, requiring the proponent to all but eliminate the possibility of phony authorship.10 Others hold that social media evidence is just like any other type of evidence,11 requiring only the introduction of facts from which a reasonable juror could find that the evidence was created by the purported author. We submit that the permissive approach aligns better with the text of Rule 901 and is thus correct.12 Rule 901(a) requires only a preliminary showing that the evidence is what the proponent claims; this “does not require . . . rul[ing] out all possibilities inconsistent with authenticity.”13 Evidence that an imposter created the content might be a basis for admitting the evidence conditionally under Rule 104(b) or for excluding it under Rule 403, but it should not affect whether Rule 901’s threshold for authentication can be met.14 Once the proponent presents enough evidence for a reasonable juror to find that the author was who the proponent asserts, evidence suggesting otherwise may affect the weight the jury gives the evidence but should not impact its admissibility.15
Even so, some courts continue to apply the more stringent approach.16 For example, in United States v. Vayner, the U.S. Court of Appeals for the Second Circuit reversed a district court’s decision to admit screenshots from a social media profile that contained the defendant’s name, photo, and work history.17 Vayner holds that merely presenting evidence proving that a post came from a particular user’s account is insufficient to authenticate the post as actually coming from that user.18 Regardless of which approach is correct, lawyers cannot take for granted that courts will rule in their favor on evidentiary issues—particularly those involving complex technology and novel evidence in the heat of trial, amid numerous other evidentiary motions and objections.
Authenticating social media evidence at trial
Lawyers offering social media evidence at trial should be prepared to “over-authenticate” their evidence by laying a foundation that, if possible, substantially eliminates the possibility that an imposter created the content. If a witness will admit to authoring a post or owning a social media profile, and can lay a foundation supporting that admission, then the proponent’s work should be done.19 But in criminal cases (and even some civil ones), the Fifth Amendment may make this type of testimony unavailable if the witness believes that providing it could be self-incriminating. In any event, adverse witnesses often will simply be unwilling to admit that they created a post, or even that they remember doing so. Authentication of social media evidence should thus rely on foundational testimony about three topics: (1) circumstantial evidence of authorship or account creation, (2) how the evidence was identified and verified (i.e., “chain of custody”), and (3) how the social media platform itself provides the evidence with indicia of reliability. We address each of these topics in turn below.
1. Circumstantial evidence of authenticity
Witnesses can testify from personal knowledge about “contextual clues in the communication tending to reveal the identity of the sender.”20 This is the type of testimony that Rule 901(b) contemplates for circumstantially authenticating any type of communication. Consider the following lines of questioning:
Does the evidence contain information—photos, friends, locations, etc.—that is consistent with a witness’s testimony about the asserted author or about how that person writes, speaks, or behaves? For instance, in Allen v. Zonis, an Internet stalking case in which one of the authors of this article was appellate counsel, the plaintiff testified that the writing style of the abusive emails she received from anonymous senders matched that of messages the defendant had sent her previously.21 Also, in Burgess v. State, a Myspace account bearing the name “Oops” was properly authenticated through an officer’s testimony that he had confirmed with the defendant’s sister that the defendant’s nickname was “Oops.”22
Have witnesses previously communicated with the asserted author using this profile? In Allen, the plaintiff’s authenticating testimony included the fact that she received the anonymous, threatening messages at an email account that only the defendant had ever used to communicate with her. This illustrates how linking a previously used communication channel with the purported author can be an effective means of establishing genuine authorship.
Does the post include a username that is consistent with posts on other platforms that are more readily linked to the asserted author? For example, even if a Facebook page contains no photos or uses a false name, witness testimony that the same name appears on other social media platforms containing visual depictions of the purported author can be sufficient to authenticate the Facebook page.23
Have the asserted author’s offline activities ever corresponded to events or experiences described over social media? This can be a particularly persuasive way to authenticate social media evidence. Even a single instance where, for example, the purported author met with someone after arranging the encounter through social media can be enough to authenticate not only the messages arranging the encounter, but all messages coming from the account in question.24
Do timestamps or geolocation data associated with the post help connect it to particular people or events? Social media posts often contain information indicating the date, time, and location of the post’s creation.25 Witness testimony that the purported author was in that location on that date can thus help authenticate the evidence. This type of data is not always accurate, however,26 and attorneys should be prepared to offer testimony explaining any discrepancies.27
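Where such metadata matters, a technical consultant or e-discovery specialist, rather than trial counsel, will typically extract and interpret it. Purely as an illustration of what that involves, the short Python sketch below reads embedded date and GPS (EXIF) data from a photo file. The file name is hypothetical, the sketch assumes the Pillow library is installed, and many platforms strip or rewrite this metadata on upload, which is one reason such data can be unreliable.

```python
# Minimal sketch: reading embedded date/GPS (EXIF) metadata from a photo file.
# Requires the Pillow library (pip install Pillow); the file name is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

img = Image.open("produced_photo.jpg")          # photo produced in discovery
exif = img._getexif() or {}                     # raw EXIF tags, if any survive
named = {TAGS.get(tag, tag): value for tag, value in exif.items()}

gps_raw = named.get("GPSInfo", {}) or {}
gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

print("Taken:", named.get("DateTimeOriginal"))  # None if the platform stripped it
print("GPS:  ", gps.get("GPSLatitude"), gps.get("GPSLongitude"))
```

If these fields come back empty, that absence itself may be worth explaining through testimony about how the platform processes uploads.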
2. “Chain of custody” evidence
Offering testimony from investigators, electronic discovery specialists, or expert witnesses can help authenticate social media evidence by establishing the evidence’s “chain of custody,” that is, how the proponent’s investigation identified the information, verified it, and led to its inclusion in the exhibit offered at trial. In particular:
How was the evidence identified and then copied, reproduced, or transcribed into the exhibit being offered in court? This testimony should include a description from the witness of how the evidence was accessed and turned into an exhibit. For instance, an investigator could testify to accessing a particular website or app, taking a screenshot of what appeared on the device’s screen, and printing out the screenshot. A percipient witness can then testify as to whether the printout fairly and accurately reflects the social media evidence that the witness initially saw. (One simple way an examiner might document that a collected file has not changed since capture is sketched at the end of this section.)
Do IP addresses or social media subscriber records link the evidence to a particular person? Social media companies may be compelled to disclose certain records in response to a subpoena, including subscriber information (such as the phone numbers or email addresses linked to an account) and IP address logs. Social media companies will generally also provide a certification from an authorized records custodian to establish a self-authenticating business record under Fed. R. Evid. 902(11).28 Note, however, that this certification establishes only “that the depicted communications took place between certain Facebook accounts, on particular dates, or at particular times,” which is not sufficient in isolation to authenticate the content of a social media post in relation to a particular author.29
What steps were taken to rule out other accounts with the same or similar usernames? Commonwealth v. Mangel affirmed the trial court’s denial of the prosecution’s motion in limine to admit Facebook communications where, among other things, a search on Facebook for the defendant’s name yielded five profiles under that name, contradicting a detective’s testimony that only one such account appeared during her search.30 This illustrates the importance of using multiple avenues to authenticate evidence; an investigator’s testimony about chain of custody may be insufficient in isolation if multiple profiles use the same name.
Did the proponent obtain the account’s username and password to verify the source of the evidence? The trial court in Mangel faulted the prosecution for not obtaining the username or password for the Facebook account at issue to confirm its authenticity. To the extent available, obtaining login credentials for a social media account—which, in theory, only the account’s true owner should possess—is a reliable means of authenticating the social media account. However, given the intimacy and breadth of personal information often contained in social media accounts, courts may be wary about compelling parties to produce their login credentials, particularly in civil cases.31
Were social media apps on devices in the asserted author’s possession logged in to accounts associated with the evidence at issue? In United States v. Lewisbey, the court held that incriminating Facebook posts were properly authenticated because (among many other circumstantial links between the defendant and the Facebook account) the Facebook app on a mobile phone confiscated from the defendant was linked to the account from which the incriminating statements were posted.32 Likewise, in an Internet child pornography case tried by one of the authors of this article, a computer in the defendant’s bedroom was logged into AOL Instant Messenger at the time of his arrest under a screenname involved in chat logs discussing child pornography.33 As mentioned above, in theory, only the true owner of a social media account has the means to access that account. Therefore, the fact that an account is accessible on a device in the asserted author’s possession is a particularly strong indicator of genuine authorship.
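One simple practice that supports several of the chain-of-custody points above is recording a cryptographic hash of each collected file (a screenshot image, an exported archive, a produced video) at the time of capture. The Python sketch below is illustrative only and uses a hypothetical file name; recomputing the same SHA-256 value at trial supports testimony that the exhibit has not changed since it was collected.

```python
# Minimal sketch: recording a SHA-256 digest of a collected exhibit file.
# The file name is hypothetical; in practice the digest would be logged with
# the date, time, and identity of the person who performed the collection.
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of_file("screenshot_exhibit_001.png"))
```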
3. Technological safeguards of authenticity
Background information about a social media platform’s operation can explain how the platform, by design, seeks to guard against phony content. This might require testimony from an expert or from a representative of the social media company. For example:
- Do the platform’s terms of service prohibit using false or invented profiles?
- Does the platform require users to create accounts using unique login credentials?
- Is the post from the account of a public figure whose identity the social media company has “verified”?34
- Must users verify their accounts using email confirmation, two-factor authentication, or other additional layers of security?35
- In the witness’s training or experience, how often has evidence of this type proven to be fraudulent, and what would one expect to see if that were the case?
Eliciting testimony on these issues in isolation likely will not be sufficient to authenticate the substance of a social media communication. However, covering all three of the areas discussed above—circumstantial evidence of authorship, chain of custody, and the operation of the platform—will help ensure that social media evidence is properly authenticated. Authentication is supposed to be a lenient standard. Once the proponent meets the low bar of authentication, arguments to the contrary should go to the weight to be given the evidence rather than to its admissibility, and it should ultimately be up to the trier of fact to accept or reject such evidence.
The next frontier: even more challenges
Three years ago, researchers used many hours of video from Barack Obama’s weekly addresses to teach an artificial intelligence program to map spoken-word audio onto video of mouth shapes. Researchers then used the program to create a photorealistic video of Mr. Obama appearing to speak the words from an audio clip of the researchers’ choosing.36 These techniques can be used to make convincing videos, known as “deepfakes,” of people appearing to say just about anything.37 Similar technology is being used to create photorealistic images of people who do not exist38 and to paint public figures such as Facebook CEO Mark Zuckerberg or House Speaker Nancy Pelosi in an unflattering light.39 Technology of this sort is becoming widespread, and similar types of digital deception are already prevalent,40 with one study estimating that between 9 percent and 15 percent of all Twitter users were not people but “bots,” software-controlled accounts “algorithmically generating content and establishing interactions.”41 The capacity to create convincing forgeries of social media content likely will continue to increase.
While authentication under the rules of evidence is a lenient standard, it must be scrupulously applied as the pervasiveness of digital fakery increases. Lawyers must be creative and thorough in authenticating social media evidence, presenting information not only linking evidence to an asserted author, but also tending to rule out links to potential imposters—a showing that courts are increasingly willing to require.42 Likewise, lawyers opposing the admission of evidence should press the proponent to demonstrate that the evidence is not fabricated. For example:
- Is there any reason to think someone other than the asserted author would have the desire, means, and opportunity to falsely create the evidence?
- Do the social media company’s records indicate that the account in question was affected by a data breach, and if so, has the account’s password been changed since then?
- Does the platform allow users to modify or edit media before posting it?43
- Are there any identifiable instances in which someone other than the asserted author posted to the account in question? Were any necessary remedial steps taken, and were those steps documented?
- Does the platform actively review content in an effort to identify and remove false or misleading posts?44 If so, how often, and are such efforts documented?
- Is there any forensic evidence that indicates the evidence has been tampered with?45
Finally, lawyers should also consider whether, given the purpose for offering the evidence, authenticating authorship even matters. For example, in United States v. Vazquez-Soto, the First Circuit rejected the argument that Facebook photos were not properly authenticated, explaining that “the [Facebook] account’s ownership is not relevant. . . . [W]hat is at issue is only the authenticity of the photographs, not the Facebook page.”46 Thus, the court held that an agent’s testimony that he recognized the defendant in social media photos was sufficient to authenticate the photos, particularly since jurors could view the photos and rely on their own observations of the defendant in the courtroom.47 Similarly, in Penn v. Detweiler, the court denied a police officer’s motion to exclude Facebook videos allegedly showing the use of excessive force, which the plaintiff planned to present without testimony from the individuals who recorded the videos.48 The court reasoned that, since both parties were shown in the video, their testimony would be sufficient to authenticate it, essentially acknowledging that the videographer’s identity was irrelevant.49
This reasoning would be of no help, however, if the court is concerned that the content itself has been manipulated.50 For example, in Gray v. Perry, the defendants moved to exclude an expert’s reliance on a YouTube video comparing the plaintiff’s song with an allegedly infringing song, arguing, among other things, that “the creators of those videos may have changed the songs to make them sound more similar.”51 The court agreed, holding that without “testimony from the creators of those videos as to the manner by which they altered the sound recordings,” the videos could not be properly authenticated.52
These are not idle concerns. In June 2020, the American Bar Association reported on a British family law proceeding in which a party doctored a recording of her spouse to make it sound like he was threatening her.53 Deepfakes are already here, and trial attorneys must adapt their authentication strategies to meet this new challenge. Presenting testimony from experts who understand digital fakes and are adept at identifying them54 may become an informal requirement.55 These concerns will be particularly important in criminal cases to ensure that the government does not knowingly or unknowingly use adulterated evidence to prove criminal culpability.
This publication/newsletter is for informational purposes and does not contain or convey legal advice. The information herein should not be used or relied upon in regard to any particular facts or circumstances without first consulting a lawyer. Any views expressed herein are those of the author(s) and not necessarily those of the law firm's clients.