On Amazon, an algorithm determines which product reviews should be highlighted. On Twitter, an algorithm determines which tweets should appear at the top of each user’s timeline. On Instagram, an algorithm determines in what order posts should be displayed.

In short, it’s almost impossible to find a popular digital service that doesn’t in some way employ algorithms to deliver content to users.
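Though the details differ by platform and are closely guarded, the basic shape of such a feed-ranking algorithm is simple: score each piece of content for each viewer, then sort. Here is a minimal, hypothetical sketch in Python (the signals and weights are illustrative assumptions, not any platform’s actual formula):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float  # how often this viewer interacts with the author
    engagement: int         # likes, shares and comments so far
    age_hours: float        # hours since the post was published

def score(post: Post) -> float:
    # Hypothetical weights: favor familiar authors, reward engagement,
    # and let older posts decay.
    return (2.0 * post.author_affinity + 0.5 * post.engagement) / (1.0 + post.age_hours)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The "algorithm" reduces to: highest score first.
    return sorted(posts, key=score, reverse=True)
```

Everything contentious lives in the choice of signals and weights, which is precisely what the platforms don’t disclose.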

For marketers, the algorithmization of the web has been a fact of life for years.

While changes to algorithms have been a source of angst and frequent complaint, marketers have been forced to accept that their success or failure on the web is in large part determined by algorithms they don’t control, and by their ability to understand and make the most of them.

Some marketers, of course, have fought against the way algorithms are used. For example, numerous companies have accused Google of tweaking its algorithm to favor its own properties, and such claims have frequently been cited in discussions about whether regulators should pursue antitrust charges against the search giant.

But by and large, Google has escaped a Microsoft-like crackdown, perhaps in part because marketers themselves are viewed unfavorably by regulators and the public.

Now, however, a real war against algorithms appears to be underway.

Recently, German Chancellor Angela Merkel voiced the concern that “algorithms, when they are not transparent, can lead to a distortion of our perception, they can shrink our expanse of information.” She explained:

I’m of the opinion that algorithms must be made more transparent, so that one can inform oneself as an interested citizen about questions like ‘what influences my behaviour on the internet and that of others?’

Her concerns are being echoed by others following Donald Trump’s stunning upset victory over Hillary Clinton in the 2016 US presidential race.

Now, many are accusing Facebook’s algorithm of helping Trump win by allowing misinformation to spread widely across its network.

Writing for New York Magazine, Max Read went so far as to claim that “Donald Trump won because of Facebook.”

He argues: “The most obvious way in which Facebook enabled a Trump victory has been its inability (or refusal) to address the problem of hoax or fake news.

“Fake news is not a problem unique to Facebook, but Facebook’s enormous audience, and the mechanisms of distribution on which the site relies — i.e., the emotionally charged activity of sharing, and the show-me-more-like-this feedback loop of the news feed algorithm — makes it the only site to support a genuinely lucrative market in which shady publishers arbitrage traffic by enticing people off of Facebook and onto ad-festooned websites, using stories that are alternately made up, incorrect, exaggerated beyond all relationship to truth, or all three.

“All throughout the election, these fake stories, sometimes papered over with flimsy ‘parody site’ disclosures somewhere in small type, circulated throughout Facebook: The Pope endorses Trump. Hillary Clinton bought $137m in illegal arms. The Clintons bought a $200m house in the Maldives.

“Many got hundreds of thousands, if not millions, of shares, likes, and comments; enough people clicked through to the posts to generate significant profits for their creators.

“The valiant efforts of Snopes and other debunking organizations were insufficient; Facebook’s labyrinthine sharing and privacy settings mean that fact-checks get lost in the shuffle.

“Often, no one would even need to click on and read the story for the headline itself to become a widely distributed talking point, repeated elsewhere online, or, sometimes, in real life.”
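The “show-me-more-like-this” feedback loop Read describes is easy to sketch: every interaction increases the weight of similar content, which makes that content more visible, which invites more interaction. A toy Python simulation (the topics, weights and click behavior here are invented for illustration, not drawn from Facebook’s actual system):

```python
from collections import defaultdict
import random

# Hypothetical per-viewer interest weights, starting out neutral.
interest = defaultdict(lambda: 1.0)
posts = [{"topic": "politics"}, {"topic": "sports"}, {"topic": "hoax"}]

def show_feed(k: int = 3) -> list[dict]:
    # Higher weight -> more likely to be shown: the heart of the loop.
    return random.choices(posts, weights=[interest[p["topic"]] for p in posts], k=k)

for _ in range(10):  # ten feed refreshes
    for post in show_feed():
        if post["topic"] == "hoax":   # emotionally charged content gets clicked...
            interest["hoax"] *= 1.5   # ...so its weight, and visibility, compounds

print(dict(interest))  # after a few rounds, "hoax" dwarfs the neutral topics
```

Nothing in the loop checks whether the content is true; it only measures whether the content is engaging.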

While Trump himself claimed throughout his campaign that the media was treating him unfairly, a claim that seems to have resonated with his supporters, many others are, like Read, largely attributing Clinton’s loss to internet-spread misinformation rather than, say, her messaging.

Algorithms aren’t perfect, but neither are people

Not surprisingly, those unhappy with the result of the US presidential election appear to be leading the criticism of Facebook and the algorithms that help determine what content is displayed to users.

But that doesn’t mean they don’t have a point. They do.

There is a real debate to be had about the power Google, Facebook and others wield through their algorithms because the potential for abuse and harmful effects is real.

For example, in 2012, Facebook conducted a psychological study by tweaking the number of positive and negative News Feed posts displayed to a random selection of over half a million of its users.

It did not alert them to the fact that they were part of a study or obtain their permission. For obvious reasons, the study, which found that emotions could be spread through social networks, was widely criticized.

But psychological studies that push ethical boundaries aside, it’s not clear that there’s an easy way to address concerns that algorithms are directing people to potentially bad information.

Some suggest that Facebook and others need to involve humans.

But humans aren’t perfect. If companies like Facebook start relying on human editors to vet the content that circulates on their services, they will arguably cease to be technology platforms and instead come to function as media organizations.

That would open a whole new can of worms, as humans are themselves vulnerable to bias and manipulation.

For example, during the election cycle, Facebook found itself under scrutiny when former Facebook staffers claimed the world’s largest social network routinely suppressed conservative news from its “trending” news section.

The accusation that one of the world’s most influential companies was engaging in censorship to favor liberal news sources led CEO Mark Zuckerberg to meet with conservative leaders. The company subsequently decided to rely more heavily on algorithms instead of an editorial team.

Perhaps the most balanced solution to the challenges algorithms present would be to increase transparency, as Germany’s Merkel has suggested.

But this too isn’t likely to have the intended effect. If companies like Google and Facebook provided the intricate details about how their algorithms function, the knowledge would almost certainly be used by those seeking to manipulate them for personal gain.

In addition, the average person is unlikely to have the interest or technical knowledge required to understand the mechanics of these algorithms, even if this information were accessible to them.

Finally, bad information isn’t going away. Human editorial controls – and censorship – might be able to reduce the spread of information deemed inaccurate or harmful, but misinformation and its ill effects existed well before the internet came along.

An inconvenient truth

Founding father Thomas Jefferson is often credited with the observation that “a properly functioning democracy depends on an informed electorate.”

With over half of US adults now getting news through social media, according to Pew, there is no doubt that social media plays an increasingly important role in how the electorate is informed.

But Jefferson is also said to have stressed the importance of education and critical thinking:

An enlightened citizenry is indispensable for the proper functioning of a republic. Self-government is not possible unless the citizens are educated sufficiently to enable them to exercise oversight.

The 2016 US presidential election, following the UK’s Brexit vote, has turned algorithms into something of a scapegoat.

And while we should discuss and debate the role they play in all aspects of our society, from how marketing messages are delivered to consumers to how news is disseminated to citizens, we should also be very careful that we don’t blame algorithms for our own shortcomings.

If we do, it will sadly pave the way for an Orwellian web that is less free and more subject to the abuses of concentrated power.