Legal Insights on Social Media Screening
This blog is a recap of our November webinar panel questions and Q&A on the legal landscape of social media screening - now available to watch on demand here. The discussion focuses on US law, though many of the principles apply elsewhere. If you’re curious about background screening law in Central and South America, check out this informative post from our partner, Latin America Backgrounds.
Social media screening is becoming more popular and important, and doing it right—legally—is paramount. So on to our questions and summarized answers, provided by industry experts Scott Paler, Partner at DeWitt LLP, and Darrin Lipscomb, CEO at Ferretly.
There are at least four legal advantages to running a social media background check.
The first is helping to avoid negligent hiring claims. When negligent hiring claims arise, they can be explosive. I saw one last week in Miami that resulted in a jury verdict of $141 million. In these situations, an employer hires someone who goes on to do something terrible, and the question becomes whether the employer could have avoided the harm if they’d done more due diligence during hiring. Social media background checks offer a way to avoid that type of risk because they provide insight that might help an employer head off future harm.
The second advantage is weeding out the chronic complainer. After years of litigating employment cases, a common thread emerges among plaintiffs—they often fit into a "chronic complainer" category. These are individuals who blame circumstances or others for problems, never themselves. Social media checks can sometimes reveal whether someone fits into this category.
The third advantage is avoiding workplace harassers. Harassers are often repeat offenders. Social media can provide insight into whether someone is engaging in commentary or conduct that suggests they might create harassment issues in the workplace.
The fourth advantage relates to monitoring existing employees. Employers can run social media background checks on current employees to identify potential harassment or other issues, such as one employee harassing another via social media.
There’s no federal law prohibiting employers from running social media background checks. About 35 states have laws restricting employers from accessing private social media pages of applicants or employees. However, these laws don’t prevent employers from accessing publicly available social media information.
For example, some states, like California and New York, prohibit discrimination based on lawful political activity, such as participating in a peaceful protest. But behavior like violent protest or harassment is not protected, and Ferretly’s software lets users tailor which behaviors are surfaced to avoid running afoul of state laws.
Yes, it can if done poorly. Title VII prohibits workplace discrimination, harassment, and retaliation, and includes protections for minorities and other groups. One potential risk of running a social media background check is exposing information about protected characteristics, such as someone’s religion, medical conditions, or union affiliations. Employers are better off not seeing this type of information because it could lead to accusations of discrimination if the candidate isn’t hired.
The best approach is to work with a professional social media background screening service that filters out protected information and focuses on relevant behaviors.
Yes, those cases are starting to emerge. Recently, the Ninth Circuit Court ruled on a case involving an employee with an Instagram page that promoted horrific content, including violence against women. This employee also made specific threats toward a coworker on his page. When the employer became aware of this, the court determined that the employer could be held liable for harassment, even though the comments were made off-duty and on a private social media page. The court concluded that employers must investigate social media content that bears on the terms and conditions of employment, even if it occurs outside the workplace. This signals that social media is becoming a material consideration for employers assessing risk management.
The number one reason is to eliminate bias. A consistent set of rules and values must be applied across the organization. For example, a store manager in Arkansas might review candidates' LinkedIn or Facebook profiles differently than a store manager in Boston. Outsourcing ensures consistency in the process.
Efficiency and thoroughness are other key reasons. Professional screening services ensure a higher level of accuracy. Additionally, under the Fair Credit Reporting Act (FCRA), outsourcing provides a mechanism for candidates to dispute any incorrect findings in the report. Without outsourcing, an ad hoc review by a store manager could result in mistakes, such as reviewing the wrong profile.
A good screening provider also filters out information employers shouldn’t see, such as medical conditions or union affiliations. This helps mitigate risk and ensures compliance with employment laws.
It’s a multi-step process. At Ferretly, for instance, we use technology to identify social media profiles that match the candidate. The technology looks for identifiers like name, location, employer, email, or other unique attributes. We aim for a high level of confidence, requiring at least three unique identifiers to match before including a profile in the report. If confidence is not high enough, the profile is excluded.
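The matching rule described above—requiring at least three unique identifiers to agree before a profile is included—can be sketched as a simple threshold check. This is a hypothetical illustration only, not Ferretly’s actual implementation; the field names and data shapes are assumptions drawn from the description.

```python
# Illustrative sketch of a confidence threshold for profile matching.
# A discovered profile is included in the report only when at least
# three unique identifiers agree with the candidate's known details.
REQUIRED_MATCHES = 3

# Hypothetical identifier fields, per the description above.
FIELDS = ("name", "location", "employer", "email")

def identifier_matches(candidate: dict, profile: dict) -> int:
    """Count how many identifiers agree between candidate and profile."""
    return sum(
        1
        for f in FIELDS
        if candidate.get(f) and candidate.get(f) == profile.get(f)
    )

def include_in_report(candidate: dict, profile: dict) -> bool:
    """Include the profile only if confidence meets the threshold."""
    return identifier_matches(candidate, profile) >= REQUIRED_MATCHES

candidate = {"name": "Jane Doe", "location": "Austin, TX",
             "employer": "Acme Co", "email": "jane@example.com"}
profile_a = {"name": "Jane Doe", "location": "Austin, TX",
             "employer": "Acme Co"}                 # 3 matches -> included
profile_b = {"name": "Jane Doe", "location": "Miami, FL"}  # 1 match -> excluded

print(include_in_report(candidate, profile_a))  # True
print(include_in_report(candidate, profile_b))  # False
```

In practice a real matcher would weight identifiers differently and handle fuzzy matches (nicknames, abbreviations), but the core idea is the same: below the confidence threshold, the profile is excluded rather than guessed at.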
Additionally, certified analysts review the findings to ensure accuracy. While no process is perfect, we strive to minimize errors. If a profile match is disputed, there is a process to reinvestigate and correct any inaccuracies.
Social media screening typically targets the most widely used platforms, covering those with hundreds of millions of daily active users. At Ferretly, we focus on seven platforms that account for about 95% of social media users worldwide, including Facebook, X, Instagram, Reddit, TikTok, and LinkedIn.
As for how far back screening goes, the FCRA generally limits reviews to seven years for employment purposes. However, exceptions may apply depending on the use case or the nature of the screening.
Yes, the FCRA applies when social media checks are conducted for employment purposes. The Act governs how background screening companies, known as consumer reporting agencies, and employers use and handle the information. Employers must follow specific procedures to ensure compliance.
There are two main procedural requirements under the FCRA:

1. Disclosure and authorization. Before ordering the check, the employer must provide a clear, standalone disclosure that a consumer report may be obtained and must get the candidate’s written authorization.
2. Adverse action process. If the employer may take adverse action based on the report, it must first send a pre-adverse action notice with a copy of the report and a summary of FCRA rights, give the candidate a chance to respond, and then send a final adverse action notice if it proceeds.

Employers who adopt these processes typically find them straightforward once implemented.
Yes, for several reasons. Social media offers a unique, unvarnished look at candidates’ behavior and character outside of the structured interview process. It provides insight into how they act when they’re not presenting their "best self."
That said, it’s crucial to conduct social media screening the right way. Sloppy or inconsistent checks can create unnecessary risks. Partnering with a professional service ensures that screenings are done effectively and in compliance with legal standards.
This depends on the employer's goals and risk tolerance. Some advocate conducting screenings at the conditional offer stage, aligning with how criminal background checks are typically handled. Others believe earlier screening offers more value, as it can inform decisions earlier in the hiring process.
There isn’t a universal answer, but the best approach balances compliance, risk management, and organizational needs.
The distinction has been addressed in legislative and judicial contexts. Generally, publicly available social media—content accessible without a login or specific permissions—is considered fair game for employers. In contrast, content behind a password-protected wall or shared privately is typically off-limits unless the individual consents to its review.
For example, a Ninth Circuit case recently defined public social media as any content accessible without a login, such as posts visible to the world or embedded on external websites. This clarity is helpful, particularly in the context of data privacy and social media usage.
From a screening provider’s perspective, we don’t make judgments. We stick to predefined classifications and definitions, ensuring objectivity. For example, we identify content that matches categories like prejudice or hate speech based on explicit criteria. The goal is not to decide if something is "good" or "bad" but to surface content that matches the specified criteria.
Employers or readers of the reports may apply their own judgment when reviewing the flagged content. This is inherently subjective, similar to other parts of the hiring process. Employers must evaluate whether flagged behavior aligns with their company’s values, culture, and policies.
Ultimately, it’s about striking a balance. Screening providers deliver the data objectively, while employers interpret it through the lens of their organizational standards.
Screening providers aim for objectivity, relying on defined classifications like hate speech, violence, or prejudice. The goal is to flag content that matches those criteria, not to make subjective judgments.
If an account is private and discoverable, we still include it in the report if it meets our confidence criteria, but we clearly indicate that it’s private and could not be fully analyzed. We also document any information we can glean, such as bio details or engagement metrics, while respecting privacy limitations.
It depends on the buyer and the use case.
The key is to tailor your messaging to the specific pain points of each audience segment. Different industries and roles will prioritize different value propositions.
This isn’t a significant trend we’ve observed yet. While some individuals use private or burner accounts, many in younger generations still maintain public profiles, particularly for platforms like Instagram and TikTok, where they seek visibility and engagement.
However, addressing burner accounts or hidden behavior is a challenge. The process involves using advanced search techniques and human analysts to identify potential connections between accounts and individuals. While no system is perfect, continuous improvements in AI and analytics help enhance accuracy.
The FCRA generally applies to employment, licensing, insurance, and credit purposes. For example, if a company conducts a social media check to monitor the reputation of a public figure representing their brand, this could fall outside FCRA because it’s not tied to employment or one of the other permissible purposes.
That said, many organizations assume FCRA applies to err on the side of caution, even in ambiguous situations.
Yes, the general consensus is that volunteer checks fall under the FCRA’s employment purposes prong. While "volunteer" may not sound like employment, federal advisory opinions have broadly interpreted the law to include these scenarios. For now, it’s safest to assume FCRA compliance is required.
Yes, depending on the context.
Ultimately, it’s case-specific, and caution is key to staying compliant.
The process mirrors standard background check disputes: the candidate notifies the screening provider of the claimed inaccuracy, the provider reinvestigates (generally within 30 days under the FCRA), and the report is corrected or the disputed item removed if it cannot be verified.
Disputes are rare but typically involve mistaken identity or misattributed content. Strong processes and human analysts minimize errors.
We constantly evaluate new platforms based on their user base and relevance. Platforms like Telegram, BlueSky, or Threads are on our radar but currently have lower adoption rates compared to the major platforms we cover.
In Q1 of next year, we plan to roll out video analysis, including platforms like YouTube, to expand our coverage.
Conclusions
The legal landscape of social media background checks is indeed navigable. If you have hesitation or questions, reach out to our webinar co-presenter, Scott Paler at DeWitt LLP, who specializes in employment law, specifically background screening. If you want to learn more about how Ferretly can streamline your social media screening, schedule a quick demo with our team!