
Facebook was once seen as a neutral space to connect with friends, family, and customers; today, it has become an unstable, often hostile system where ordinary users, small businesses, and even long-time advertisers can lose everything overnight, with little hope of appeal.

This growing pattern of hacks, faulty “selfie” checks, opaque bans, and ad fraud is no longer a series of isolated glitches — it looks like a systemic failure of accountability.

 

Locked out by selfie systems

 

In recent years, Facebook has leaned heavily on “video selfie” and ID-verification systems that often malfunction, misread faces, or simply never complete the review. Users report being told to upload a video selfie or ID, doing exactly what the system asks, and then seeing their accounts locked permanently anyway—sometimes after a decade or more of use. These accounts often contain years of photos, messages, business pages, and ad history, yet users receive only generic notices that they “failed” verification and that the decision is final.

One typical scenario involves a longtime user asked to confirm identity after logging in from a new device. The user submits the selfie or ID multiple times, only to encounter error messages, looping screens, or silence. Eventually the account is disabled. There is no live support, no clear explanation of what went wrong, and no human review reachable through normal channels. For many, this feels less like security and more like automated dispossession.

 

Scammers thrive while victims are ignored

 

At the same time, scammers and hackers appear to navigate the platform more easily than legitimate users. People routinely have their accounts taken over by impostors who then send fraudulent messages, run crypto or investment schemes, or request money from friends and family. Victims often discover the problem only after friends alert them or after seeing strange purchases or ad charges tied to their profiles.

Yet when those victims attempt to regain access, they encounter the same rigid, automated walls: password resets that go to the hacker’s email, recovery links that fail, and forms that generate canned replies. In some cases, users are blamed for “violating community standards” because the hacker used their account for spam or scams, and the platform disables the account altogether. The scammer walks away to the next target; the original owner is left with no way back in.

 

Advertisers paying for bots and broken systems

 

Advertisers face a parallel nightmare. Many small and midsize businesses describe running campaigns that show strong “click” numbers but very little real engagement or sales, suggesting that bots, click farms, or low-quality traffic are consuming their budgets. They may see thousands of impressions and clicks in the ad manager, yet no corresponding inquiries, email signups, or purchases. When they question these results, they often get boilerplate responses insisting everything is working as intended.

Some advertisers also report unauthorized charges or campaigns launched without their consent after an account compromise. Others say their ads are abruptly rejected for vague reasons, such as “policy violation,” even when they’ve been running similar ads for years. Appeals, if available at all, are slow, cryptic, or simply ignored. In many cases, substantial ad spend—sometimes hundreds or thousands of dollars—is lost without refund or meaningful explanation.

 

Sudden bans and no real appeals

 

Perhaps the most disturbing pattern is the speed and finality with which Facebook can ban users and advertisers—contrasted with the near impossibility of getting those decisions reviewed. Accounts vanish after algorithmic decisions about “community standards,” “suspicious activity,” or “policy violations,” often without citing any specific post or behavior. For business owners, losing a Facebook account can mean losing their Facebook Page, ad account, and access to customers all at once.

Appeal processes, on paper, exist. In practice, they are frequently:

  • Hidden behind layers of menus and nonfunctional forms
  • Limited to one-click “disagree” buttons with no place to explain the situation
  • Responded to by further automated messages repeating the original decision

Many users and advertisers never reach a human being. The experience is particularly infuriating for those who have followed the rules for years, only to be treated as disposable when an algorithm misfires or a hacker abuses their profile.

Why this is more than “just a website”

What makes these problems so serious is that Facebook functions, in many ways, like critical infrastructure. People use it to:

  • Run and advertise their businesses
  • Organize community events and nonprofits
  • Maintain social and professional networks built over years
  • Store memories, photos, and messages that may exist nowhere else

When one company controls that much of the social and advertising ecosystem, arbitrary loss of access becomes a form of real-world harm: lost income, damaged reputations, emotional distress, and even practical safety concerns when key contacts or support networks disappear.

The imbalance of power is stark: a highly automated system can terminate an account in seconds, but the affected person may spend weeks or months trying—and failing—to get a single human response.

 

What individuals can do now

 

Until the system changes, people and businesses can take self-protective steps:

 

  • Back up everything: Regularly download copies of important photos, messages, and contact information so your digital life is not locked inside a single platform.
  • Harden account security: Use strong, unique passwords and multifactor authentication, and regularly review login alerts and authorized devices.
  • Separate personal and business assets: Keep critical business data, email lists, and customer relationships in independent systems (websites, CRMs, email services), using Facebook only as one of several channels.
  • Monitor ad performance critically: Track conversions outside the platform (website analytics, sales systems), and be prepared to pause campaigns quickly if performance looks suspicious or inconsistent with realworld results.
  • Document everything: Save screenshots of errors, notices, ad metrics, and support exchanges in case you need to present a clear record to regulators, consumer-protection agencies, or legal counsel.

These steps cannot fix the underlying structural issues, but they can reduce the damage when something goes wrong.
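The “monitor ad performance critically” step above can be made concrete with a simple cross-check: compare the clicks the ad manager reports against sessions measured by your own website analytics, and flag any campaign where the gap is implausibly large. The sketch below is illustrative only; the campaign names, figures, and the 3:1 threshold are hypothetical assumptions, not values drawn from any platform.

```python
# Sketch: flag campaigns whose platform-reported clicks far exceed
# independently measured website sessions. All data here is illustrative.

def flag_suspicious_campaigns(campaigns, max_click_to_session_ratio=3.0):
    """Return names of campaigns where reported clicks outpace real
    sessions by more than the chosen ratio (a judgment call, not a rule)."""
    flagged = []
    for c in campaigns:
        sessions = c["site_sessions"]
        clicks = c["reported_clicks"]
        if sessions == 0 and clicks > 0:
            # Paid clicks with zero measurable visits is itself a red flag.
            flagged.append(c["name"])
        elif sessions > 0 and clicks / sessions > max_click_to_session_ratio:
            flagged.append(c["name"])
    return flagged

# Hypothetical data: clicks from the ad manager vs. sessions from your
# own analytics or sales system.
campaigns = [
    {"name": "spring-sale", "reported_clicks": 1200, "site_sessions": 950},
    {"name": "new-product", "reported_clicks": 4000, "site_sessions": 310},
]
print(flag_suspicious_campaigns(campaigns))  # ['new-product']
```

A flagged campaign is not proof of fraud, but it tells you which spend to pause first and which numbers to document if you later need to dispute charges.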

 

What government and regulators can do

 

Because the harms are widespread and systemic, meaningful change will likely require public pressure and regulatory action. Policymakers and regulators could:

 

  • Mandate basic due process online: Require large platforms to provide clear reasons for account closures and ad rejections, along with accessible, timely appeal routes involving trained human reviewers for users and advertisers.
  • Set minimum standards for identity systems: Establish guidelines for verification tools, like selfie and ID checks, to reduce false failures and require clear remedies when systems malfunction or wrongly deny access.
  • Increase transparency on fraud and enforcement: Compel platforms to publish regular, independently audited reports on hack rates, scam incidents, ad refund practices, and appeal outcomes, so the public can see how often systems fail.
  • Impose penalties for negligent practices: Where platforms repeatedly profit from clearly flawed systems—such as charging advertisers for invalid traffic or refusing refunds after proven account compromises—regulators can impose fines or restitution requirements.
  • Support interoperability and competition: Encourage alternative platforms and open standards so that users and businesses are not entirely dependent on any single company’s opaque policies and tools.

When platforms operate at the scale of public utilities, the law should not treat their failures as mere “customer service issues.” Ensuring fairness, transparency, and due process in digital spaces is now a core consumer-protection and democracy-protection issue.

Facebook’s problems with selfies, scammers, bans, and broken ad systems are not just technical glitches; they reflect a deeper design choice to prioritize automation and revenue over reliability and accountability.

Until that changes, users and advertisers must protect themselves—and demand that lawmakers set clear rules so no single company can arbitrarily erase their digital lives.

 

 

 

For more information and to set up interviews, contact Changemakers Publishing and Writing at the information below.

Karen Andrews
Executive Assistant
Changemakers Publishing and Writing
San Ramon, CA 94583
(925) 804–6333
info@changemakerspublishingandwriting.com

Gini Graham Scott, Ph.D. is the author of over 50 books with major publishers and has published 200 books through her company Changemakers Publishing and Writing (changemakerspublishingandwriting.com).

She writes books, proposals, and film scripts for clients, and has written and produced 18 feature films and documentaries, including Conned: A True Story and Con Artists Unveiled, distributed by Gravitas Ventures (changemakersproductionsfilms.com).

Her latest books include Ghost Story and How to Find and Work with a Good Ghostwriter.