Please do your own research. The information I share is only a catalyst to expanding ones confined consciousness. I have NO desire for anyone to blindly believe or agree with what I share. Seek the truth for yourself and put your own puzzle together that has been presented to you. I'm not here to teach, preach or lead, but rather assist in awakening the consciousness of the collective from its temporary dormancy.
NY Times: How to Talk to Friends and Family Who Share Conspiracy Theories
Fringe movements will persist. Here’s how to help.
By CHARLIE WARZEL
No matter how many shells the Judenpresse Armada fires at us, we “conspiracy tanalysts”not only remain afloat but — as even the Times acknowledges — we continue to grow our ranks with each passing day. As a public service to the bewildered boobs who worship “the paper of record” and Fake News in general, Times scribbler Charlie Warzel has so graciously taken it upon himself to put together a tutorial on “how to help” the viral crazies now popping in every family and circle of friends — and just in time for holiday gatherings with family members too.
Warzel sets forth six strategies that “concerned” normies should use to “help” their crazy loved ones — and he prefaces his advice with the following word of caution about dealing with “conspiracy theorists”–
“Reminder: This advice pertains to friends or relatives with whom you are already close and who are not demonstrating unstable or violent behavior. It’s important to exercise restraint and good judgment in all cases.”
Fuck you Weasel. It’s the Left that’s “unstable” and “violent,” not us. And it is the standard practice of every liar, criminal and psychopath to label his adversary as “crazy.” Let’s review and rebut these six pieces of Warzel’s excrement.
1. Ask Where the Information is Coming From.
Rebuttal: Genetic Fallacy Alert! Genetic Fallacy Alert! It doesn’t matter where the data comes from. What matters is whether a thing is true or whether it is false. Although it is perfectly natural and logical to be skeptical of information coming from a dubious source, a wise man never automatically disregards data based on source alone. Never ask: “Where did you read this?” — Always ask: Is this true, and can it be independently confirmed or debunked?
2. Create Some Cognitive Dissonance — (By acknowledging that certain past conspiracies were real and have been exposed — and then asking why the conspiracy in question has not been exposed by “whistle-blowers”).
There are indeed whistle-blowers usually associated with the big conspiracy of (fill-in-the-blank). But because the press itself is part of the Deep State, you may not always come to know about it. Why would you assume that just because a certain whistle-blower or victim has been ignored by the corrupted corporate media, that he or she does not exist or is not legitimate? Do you blindly worship the Fake News?
3. Debunking is Difficult — (Because people form emotional attachments to their beliefs.)
That works both ways, Weasel. People, in general, regardless of intelligence or “education,” are pridefully slow when it comes to changing their minds. But what strikes me about you Q Anon / Satanist “debunkers” is that you NEVER even attempt to “debunk” the data — such as the disgusting child-abuse artwork preferred by the Podesta Brothers — or the shocking images and texts posted on social media by the perverted Pizza-gate clique. Are those stomach-turning images and posts that have so repulsed so many millions of people real — or are they not real? Is the Hunter Biden laptop real (public disclosure coming soon), or is it not real? Please answer.
4. Don’t Debate on Facebook — (too confrontational)
Rebuttal: Why not, Mr. Weasel? I’ve seen in-person “debates” between normies and truthers get pretty nasty (been there myself!). Could your advice to avoid Facebook debates have anything to do with the fact that online exchanges allow a truther — who may not be so articulate or well-versed in the subject matter — to instantly drop a link to one of the countless excellent articles and videos out there which provide powerful proofs for “conspiracy theories” ™? Indeed, some of the assigned anti-conspiracy monitors at You Tube were themselves converted by some of the very videos they were tasked with reviewing for censorship! Weasel is trying to prevent naive “debunkers” from fighting in a highly populated arena more favorable to truthers.
5. Mocking and Scolding Won’t Work
Rebuttal: You see that? Behind our backs, he mocks us and tells our normie friends & family that we are essentially insane little children in need of an “intervention” — but then instructs them to conceal their scorn from us — as if we astute “empaths” wouldn’t be able to instinctively sense and be properly offended by the concealed condescension of these sneaky, snarky, back-stabbing mental midgets in our midst! Oh that just does wonders for a family relationship (been there too) eh, Weasel? Typical manipulative Marxist.
6. Know When to Walk Away.
There it is!!! The payoff punch — the truth at last!
Weasel knows bloody damn well that once one has contracted the truth-virus, there’s no going back to the overlapping tyrannical Kingdoms of Libtardia and Normiedom. It is a mental and moral impossibility — notwithstanding these FAKE stories we hear about “reformed” conspiracy cultists who now “have a warning for others.” Hence, Warzel’s previous five pieces of advice were all just part of a grand smoke-screen. What this devious (((devil))) really wants is for people to “walk away” and — as a natural consequence in many cases — alienate the truther from “polite company” and even family gatherings.
In his own words, while quoting a “conspiracy researcher” named – get this – Mike Rothschild:
“If you have legitimate concerns about their health and safety, that is usually a job for professionals. In some cases, it’s important to realize there may be little you can do in the moment, some cautioned. Mr. Rothschild told me: ‘If you have to, be ready to walk away from them. There comes a point where you may not be able to have that instability in your life.’”
Too late for you Bolshevik bastards to isolate this writer, Mr. Weasel and Mr. Rottenchild. You see, though I will tolerate disagreement — I’ve already distanced myself from anyone and everyone who actually thinks me mad — and with zero regrets (yet with deep and persistent sorrow).
“A time is coming when men will go mad, and when they see someone who is not mad, they will attack him, saying, ‘You are mad; you are not like us.’”
The Machiavellian quote (sic) that “if you’re going to come at the king, you best not miss,” may be about to bite Mark Zuckerberg and his army of fact-checking mercenaries.
Zuckerberg may feel omnipotent atop his opaque algo-world, but the so-called ‘fact-checkers’ – so expert at shutting down any narrative-conflicting information (on behalf of, and often at the behest of, the Biden administration) – may have met their match by claiming that one of the world’s oldest and most prestigious medical journals delivered “false information” that “could mislead people.”
As we detailed in early November, The British Medical Journal (BMJ) – a weekly peer-reviewed medical trade journal, published by the trade union the British Medical Association – published a whistle-blower report calling into question data integrity and regulatory oversight issues surrounding Pfizer’s pivotal phase III Covid-19 vaccine trial.
Brook Jackson, a now-fired regional director at Ventavia Research Group, revealed to The BMJ that vaccine trials at several sites in Texas last year had major problems: the company falsified data, broke fundamental rules, and was ‘slow’ to report adverse reactions.
When she notified superiors of the issues she found, they fired her.
“A regional director who was employed at the research organisation Ventavia Research Group has told The BMJ that the company falsified data, unblinded patients, employed inadequately trained vaccinators, and was slow to follow up on adverse events reported in Pfizer’s pivotal phase III trial. Staff who conducted quality control checks were overwhelmed by the volume of problems they were finding. After repeatedly notifying Ventavia of these problems, the regional director, Brook Jackson, emailed a complaint to the US Food and Drug Administration (FDA). Ventavia fired her later the same day. Jackson has provided The BMJ with dozens of internal company documents, photos, audio recordings, and emails.” – The BMJ
Soon after, as the worrisome story went viral, BMJ got a taste of what Facebook, Google, and others are doing to independent media platforms. As TrialSiteNews.com reports, even though BMJ is one of the most prominent medical journals and the information was rigorously peer-reviewed, strange things started occurring.
For example, readers would try to post some of the information on social media such as Facebook to share with their networks. But “some reported being unable to share it [the information].” Moreover, individuals who were simply sharing this peer-reviewed content from The BMJ were warned by Facebook that “independent fact-checkers” had concluded “this information could mislead people.”
Moreover, they were told, “Those trying to post the article were informed by Facebook that people who repeatedly share ‘false information’ might have their posts moved lower in Facebook’s News Feed.”
In addition, some group administrators received notices from Facebook that the information was “partly false.”
Readers were sent to a “fact check” performed by Lead Stories, a third-party fact-checker.
And so, as possibly the top experts in the world when it comes to medical research information, BMJ has now been forced to fact-check the ‘fact-checkers’.
Having received no response from Facebook or from Lead Stories, after requesting the removal of the “fact checking” label, the BMJ’s editors raise a “wider concern”:
We are aware that The BMJ is not the only high quality information provider to have been affected by the incompetence of Meta’s fact checking regime…
Rather than investing a proportion of Meta’s substantial profits to help ensure the accuracy of medical information shared through social media, you have apparently delegated responsibility to people incompetent in carrying out this crucial task.
Fact checking has been a staple of good journalism for decades.
What has happened in this instance should be of concern to anyone who values and relies on sources such as The BMJ.
In addition to the points raised by BMJ and in the comments below, there is a limit to what independent fact checking can accomplish.
For example, are their fact checkers conducting their own scientific experiments validating claims and outcomes of a scientific paper? Are fact checkers reaching out to sources from a news article and verifying quoted information? When “breaking news” or “scoops” are reported presenting totally new information about the world, how can that be verified against other information that – by virtue of something being new – cannot be verified by other preexisting sources?
If the fact checking process is limited to verification based on other information that is currently available, and if the fact checking process cannot distinguish between factual information and the opinions people hold as a result of that information, the outcome will be an inevitable echo chamber that reinforces currently dominant views or whatever preexisting biases are present.
… and that is exactly what the establishment wants.
We are Fiona Godlee and Kamran Abbasi, editors of The BMJ, one of the world’s oldest and most influential general medical journals. We are writing to raise serious concerns about the “fact checking” being undertaken by third party providers on behalf of Facebook/Meta.
In September, a former employee of Ventavia, a contract research company helping carry out the main Pfizer covid-19 vaccine trial, began providing The BMJ with dozens of internal company documents, photos, audio recordings, and emails. These materials revealed a host of poor clinical trial research practices occurring at Ventavia that could impact data integrity and patient safety. We also discovered that, despite receiving a direct complaint about these problems over a year ago, the FDA did not inspect Ventavia’s trial sites.
The BMJ commissioned an investigative reporter to write up the story for our journal. The article was published on 2 November, following legal review, external peer review and subject to The BMJ’s usual high level editorial oversight and review.
But from November 10, readers began reporting a variety of problems when trying to share our article. Some reported being unable to share it. Many others reported having their posts flagged with a warning about “Missing context … Independent fact-checkers say this information could mislead people.” Those trying to post the article were informed by Facebook that people who repeatedly share “false information” might have their posts moved lower in Facebook’s News Feed. Group administrators where the article was shared received messages from Facebook informing them that such posts were “partly false.”
Readers were directed to a “fact check” performed by a Facebook contractor named Lead Stories.
We find the “fact check” performed by Lead Stories to be inaccurate, incompetent and irresponsible.
It fails to provide any assertions of fact that The BMJ article got wrong
It has a nonsensical title: “Fact Check: The British Medical Journal Did NOT Reveal Disqualifying And Ignored Reports Of Flaws In Pfizer COVID-19 Vaccine Trials”
The first paragraph inaccurately labels The BMJ a “news blog”
It contains a screenshot of our article with a stamp over it stating “Flaws Reviewed,” despite the Lead Stories article not identifying anything false or untrue in The BMJ article
It published the story on its website under a URL that contains the phrase “hoax-alert”
We have contacted Lead Stories, but they refuse to change anything about their article or actions that have led to Facebook flagging our article.
We have also contacted Facebook directly, requesting immediate removal of the “fact checking” label and any link to the Lead Stories article, thereby allowing our readers to freely share the article on your platform.
There is also a wider concern that we wish to raise. We are aware that The BMJ is not the only high quality information provider to have been affected by the incompetence of Meta’s fact checking regime. To give one other example, we would highlight the treatment by Instagram (also owned by Meta) of Cochrane, the international provider of high quality systematic reviews of the medical evidence. Rather than investing a proportion of Meta’s substantial profits to help ensure the accuracy of medical information shared through social media, you have apparently delegated responsibility to people incompetent in carrying out this crucial task. Fact checking has been a staple of good journalism for decades. What has happened in this instance should be of concern to anyone who values and relies on sources such as The BMJ.
We hope you will act swiftly: specifically to correct the error relating to The BMJ’s article and to review the processes that led to the error; and generally to reconsider your investment in and approach to fact checking overall.
Fiona Godlee, editor in chief
Kamran Abbasi, incoming editor in chief
As current and incoming editors in chief, we are responsible for everything The BMJ contains.
Welcome to the Matrix (i.e. the metaverse), where reality is virtual, freedom is only as free as one’s technological overlords allow, and artificial intelligence is slowly rendering humanity unnecessary, inferior and obsolete.
Yet while Zuckerberg’s vision for this digital frontier has been met with a certain degree of skepticism, the truth—as journalist Antonio García Martínez concludes — is that we’re already living in the metaverse.
The metaverse is, in turn, a dystopian meritocracy, where freedom is a conditional construct based on one’s worthiness and compliance.
In a meritocracy, rights are privileges, afforded to those who have earned them. There can be no tolerance for independence or individuality in a meritocracy, where political correctness is formalized, legalized and institutionalized.
Likewise, there can be no true freedom when the ability to express oneself, move about, engage in commerce and function in society is predicated on the extent to which you’re willing to “fit in.”
We are almost at that stage now.
Consider that in our present virtue-signaling world where fascism disguises itself as tolerance, the only way to enjoy even a semblance of freedom is by opting to voluntarily censor yourself, comply, conform and march in lockstep with whatever prevailing views dominate.
Fail to do so — by daring to espouse “dangerous” ideas or support unpopular political movements — and you will find yourself shut out of commerce, employment, and society: Facebook will ban you, Twitter will shut you down, Instagram will de-platform you, and your employer will issue ultimatums that force you to choose between your so-called freedoms and economic survival.
This is exactly how Corporate America plans to groom us for a world in which “we the people” are unthinking, unresistant, slavishly obedient automatons in bondage to a Deep State policed by computer algorithms.
We are living the prequel to The Matrix with each passing day, falling further under the spell of technologically-driven virtual communities, virtual realities and virtual conveniences managed by artificially intelligent machines that are on a fast track to replacing human beings and eventually dominating every aspect of our lives.
Neo is given a choice: to take the red pill, wake up and join the resistance, or take the blue pill, remain asleep and serve as fodder for the powers-that-be.
Most people opt for the blue pill.
In our case, the blue pill — a one-way ticket to a life sentence in an electronic concentration camp — has been honey-coated to hide the bitter aftertaste, sold to us in the name of expediency and delivered by way of blazingly fast Internet, cell phone signals that never drop a call, thermostats that keep us at the perfect temperature without our having to raise a finger, and entertainment that can be simultaneously streamed to our TVs, tablets and cell phones.
Yet we are not merely in thrall with these technologies that were intended to make our lives easier. We have become enslaved by them.
Look around you. Everywhere you turn, people are so addicted to their internet-connected screen devices — smart phones, tablets, computers, televisions — that they can go for hours at a time submerged in a virtual world where human interaction is filtered through the medium of technology.
This is not freedom. This is not even progress.
This is technological tyranny and iron-fisted control delivered by way of the surveillance state, corporate giants such as Google and Facebook, and government spy agencies such as the National Security Agency.
So consumed are we with availing ourselves of all the latest technologies that we have spared barely a thought for the ramifications of our heedless, headlong stumble towards a world in which our abject reliance on internet-connected gadgets and gizmos is grooming us for a future in which freedom is an illusion.
Yet it’s not just freedom that hangs in the balance. Humanity itself is on the line.
If ever Americans find themselves in bondage to technological tyrants, we will have only ourselves to blame for having forged the chains through our own lassitude, laziness and abject reliance on internet-connected gadgets and gizmos that render us wholly irrelevant.
Indeed, we’re fast approaching Philip K. Dick’s vision of the future as depicted in the film Minority Report. There, police agencies apprehend criminals before they can commit a crime, driverless cars populate the highways, and a person’s biometrics are constantly scanned and used to track their movements, target them for advertising, and keep them under perpetual surveillance.
Cue the dawning of the Age of the Internet of Things (IoT), in which internet-connected “things” monitor your home, your health and your habits in order to keep your pantry stocked, your utilities regulated and your life under control and relatively worry-free.
By the end of 2018, “there were an estimated 22 billion internet of things connected devices in use around the world… Forecasts suggest that by 2030 around 50 billion of these IoT devices will be in use around the world, creating a massive web of interconnected devices spanning everything from smartphones to kitchen appliances.”
As the technologies powering these devices have become increasingly sophisticated, they have also become increasingly widespread, encompassing everything from toothbrushes and lightbulbs to cars, smart meters and medical equipment.
Between driverless cars that completely lack a steering wheel, accelerator, or brake pedal, and smart pills embedded with computer chips, sensors, cameras and robots, we are poised to outpace the imaginations of science fiction writers such as Philip K. Dick and Isaac Asimov. (By the way, there is no such thing as a driverless car. Someone or something will be driving, but it won’t be you.)
These Internet-connected techno gadgets include smart light bulbs that discourage burglars by making your house look occupied, smart thermostats that regulate the temperature of your home based on your activities, and smart doorbells that let you see who is at your front door without leaving the comfort of your couch.
Nest, Google’s suite of smart home products, has been at the forefront of the “connected” industry, with such technologically savvy conveniences as a smart lock that tells your thermostat who is home, what temperatures they like, and when your home is unoccupied; a home phone service system that interacts with your connected devices to “learn when you come and go” and alert you if your kids don’t come home; and a sleep system that will monitor when you fall asleep, when you wake up, and keep the house noises and temperature in a sleep-conducive state.
The aim of these internet-connected devices, as Nest proclaims, is to make “your house a more thoughtful and conscious home.” For example, your car can signal ahead that you’re on your way home, while Hue lights can flash on and off to get your attention if Nest Protect senses something’s wrong. Your coffeemaker, relying on data from fitness and sleep sensors, will brew a stronger pot of coffee for you if you’ve had a restless night.
Yet given the speed and trajectory at which these technologies are developing, it won’t be long before these devices are operating entirely independent of their human creators, which poses a whole new set of worries. As technology expert Nicholas Carr notes:
“As soon as you allow robots, or software programs, to act freely in the world, they’re going to run up against ethically fraught situations and face hard choices that can’t be resolved through statistical models. That will be true of self-driving cars, self-flying drones, and battlefield robots, just as it’s already true, on a lesser scale, with automated vacuum cleaners and lawnmowers.”
Moreover, it’s not just our homes and personal devices that are being reordered and reimagined in this connected age: it’s our workplaces, our health systems, our government, our bodies and our innermost thoughts that are being plugged into a matrix over which we have no real control.
It is expected that by 2030, we will all experience The Internet of Senses (IoS), enabled by Artificial Intelligence (AI), Virtual Reality (VR), Augmented Reality (AR), 5G, and automation. The Internet of Senses relies on connected technology interacting with our senses of sight, sound, taste, smell, and touch by way of the brain as the user interface.
As journalist Susan Fourtane explains:
“Many predict that by 2030, the lines between thinking and doing will blur. Fifty-nine percent of consumers believe that we will be able to see map routes on VR glasses by simply thinking of a destination… By 2030, technology is set to respond to our thoughts, and even share them with others… Using the brain as an interface could mean the end of keyboards, mice, game controllers, and ultimately user interfaces for any digital device. The user needs to only think about the commands, and they will just happen. Smartphones could even function without touch screens.”
In other words, the IoS will rely on technology being able to access and act on your thoughts.
Orwell’s masterpiece, 1984, portrays a global society of total control in which people are not allowed to have thoughts that in any way disagree with the corporate state. There is no personal freedom, and advanced technology has become the driving force behind a surveillance-driven society. Snitches and cameras are everywhere. And people are subject to the Thought Police, who deal with anyone guilty of thought crimes. The government, or “Party,” is headed by Big Brother, who appears on posters everywhere with the words: “Big Brother is watching you.”
The corporate media narrative that unvaccinated people are filling up the hospitals and dying from COVID is quickly falling apart, perhaps faster than they even expected.
WXYZ TV Channel 7 in Detroit asked their viewers on their Facebook Page last Friday to direct message them if they had lost a loved one to COVID-19 who had refused to get one of the COVID-19 vaccines.
This is a clear indication that they are getting desperate to find these stories, and are having a difficult time finding them.
I don’t know if they got any such stories through direct messaging, but the post on their Facebook Page had received over 182,000 comments as of the time of publication today, and nearly all of them appear to be from people who lost loved ones after receiving a COVID shot, or from people asking why the station is not covering that story.
I paged through many dozens of the comments, and did not see a single one stating that they lost someone to COVID after refusing a COVID-19 shot.
Here are a few screen shots of the comments that are representative of what people are posting, in case they do take this down:
People who have been silenced and censored on Facebook and other Big Tech platforms took advantage of the opportunity to share their stories instead. It is amazing that Facebook left these up, but after so many had commented, it would probably have been an even bigger story if they had taken down the post and comments.
I wonder what WXYZ will do now? Will they do what most corporate media companies do, fueled by almost unlimited resources from their billionaire Wall Street owners who are almost all connected to the pharmaceutical industry, and just go out and hire actors instead to do the story and make them up?
The democratization of information-sharing was going to give rise to a public consciousness that is emancipated from the domination of plutocratic narrative control, thereby opening up the possibility of revolutionary change to our society’s corrupt systems.
But it never happened. Internet use has become commonplace around the world and humanity is able to network and share information like never before, yet we remain firmly under the thumb of the same power structures we’ve been ruled by for generations, both politically and psychologically. Even the dominant media institutions are somehow still the same.
How is it possible that those same imperialist oligarchic institutions are still controlling the way most people think about their world?
The answer is algorithm manipulation.
Last month, in a very informative interview, the CEO of YouTube, which is owned by Google, candidly discussed the way the platform uses algorithms to elevate mainstream news outlets and suppress independent content.
At the World Economic Forum’s 2021 Global Technology Governance Summit, YouTube CEO Susan Wojcicki told Atlantic CEO Nicholas Thompson that while the platform still allows arts and entertainment videos an equal shot at going viral and getting lots of views and subscribers, on important areas like news media it artificially elevates “authoritative sources”.
“What we’ve done is really fine-tune our algorithms to be able to make sure that we are still giving the new creators the ability to be found when it comes to music or humor or something funny,” Wojcicki said. “But when we’re dealing with sensitive areas, we really need to take a different approach.”
Wojcicki said in addition to banning content deemed harmful, YouTube has also created a category labeled “borderline content” which it algorithmically de-boosts so that it won’t show up as a recommended video to viewers who are interested in that topic:
“When we deal with information, we want to make sure that the sources that we’re recommending are authoritative news, medical science, etcetera. And we also have created a category of more borderline content where sometimes we’ll see people looking at content that’s lower quality and borderline. And so we want to be careful about not over-recommending that. So that’s a content that stays on the platform but is not something that we’re going to recommend. And so our algorithms have definitely evolved in terms of handling all these different content types.”
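The mechanism Wojcicki describes amounts to weighting recommendation scores by content category. The sketch below is a purely hypothetical toy model: YouTube’s actual ranking system is proprietary and vastly more complex, and the category names, weights, and video data here are invented solely to illustrate the general idea of de-boosting.

```python
# Toy model of category-weighted ranking ("de-boosting"). NOT YouTube's
# real algorithm; weights and categories are illustrative assumptions.

CATEGORY_WEIGHTS = {
    "authoritative": 1.5,  # artificially elevated sources
    "neutral": 1.0,        # ranked on relevance alone
    "borderline": 0.1,     # stays on the platform, but rarely recommended
}

def rank(videos):
    """Sort candidate videos by relevance * category weight, descending."""
    return sorted(
        videos,
        key=lambda v: v["relevance"] * CATEGORY_WEIGHTS[v["category"]],
        reverse=True,
    )

videos = [
    {"id": "indie", "relevance": 0.9, "category": "borderline"},
    {"id": "cable", "relevance": 0.5, "category": "authoritative"},
    {"id": "music", "relevance": 0.6, "category": "neutral"},
]

# The most relevant video can still land last once weights are applied:
# indie scores 0.9 * 0.1 = 0.09, below cable (0.75) and music (0.6).
print([v["id"] for v in rank(videos)])  # → ['cable', 'music', 'indie']
```

The point of the sketch is that nothing need be deleted for visibility to collapse: a small multiplier on one category is enough to invert the relevance ordering.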
Progressive commentator Kyle Kulinski has a good video out reacting to Wojcicki’s comments, saying he believes his (entirely harmless) channel has been grouped in the “borderline” category because his views and new subscribers suddenly took a dramatic and inexplicable plunge. Kulinski reports that overnight he went from getting tens of thousands of new subscriptions per month to maybe a thousand.
“People went to YouTube to escape the mainstream nonsense that they see on cable news and on TV, and now YouTube just wants to become cable news and TV,” Kulinski says. “People are coming here to escape that and you’re gonna force-feed them the stuff they’re escaping like CNN and MSNBC and Fox News.”
Google itself also uses algorithms to artificially boost empire media in its searches. In 2017 the World Socialist Web Site (WSWS) began documenting the fact that it, along with other leftist and antiwar outlets, had suddenly experienced a dramatic drop in traffic from Google searches.
In 2019 the Wall Street Journal confirmed WSWS claims, reporting that “Despite publicly denying doing so, Google keeps blacklists to remove certain sites or prevent others from surfacing in certain types of results.” In 2020 the CEO of Google’s parent company Alphabet admitted to censoring WSWS at a Senate hearing in response to one senator’s suggestion that Google only censors right wing content.
All the algorithm stacking by the dominant news distribution giants Google and Facebook also ensures that mainstream platforms and reporters will have far more followers than indie media on platforms like Twitter, since an article that has been artificially amplified will receive far more views and therefore drive far more clicks through to its authors’ social media accounts.
Mass media employees tend to clique up and amplify each other on Twitter, further exacerbating the divide. Meanwhile left and antiwar voices, including myself, have been complaining for years that Twitter artificially throttles their follower count.
If not for these deliberate acts of sabotage and manipulation by Silicon Valley megacorporations, the mainstream media which have deceived us into war after war and which manufacture consent for an oppressive status quo would have been replaced by independent media years ago. These tech giants are the life support system of corporate media propaganda.
This week top-secret meetings are taking place between the top communications firms in the US.
Big Tech, Mainstream Media (Big Media), and the intelligence community are gathering to strategize on how to consolidate their power over the information being force-fed to the American people.
Mark Dice shares the following:
Every time people talk about the mainstream media conglomerates secretly collaborating with each other, visions of smoke-filled rooms and shadowy figures in expensive suits sitting around a table come to mind.
Well, that may be an exaggerated picture of what goes on behind the scenes, but it isn’t all that far from the truth.
Every July since 1983, a small group of media moguls, tech titans, investors, politicians, and intelligence agency insiders has gathered in the small town of Sun Valley, Idaho, for a week of meetings to develop a consensus on policies for mainstream media, social media, and emerging communications technology.
It’s basically like the Bilderberg meeting for media and since tech companies like Facebook, Twitter, Apple, and YouTube have become major players in the media industry, they all come together each year in Sun Valley trying to make sure no emerging platforms can threaten their power…
Watch the entire video below:
Mark is one of the great voices on YouTube that was targeted and censored since 2016. He was too effective. The above video is an example of his fabulous work.
Americans need to break free of these information-controlling entities. New media is the answer.
Facebook users have begun to receive creepy messages warning them that they ‘may have been exposed to extremist content’ and asking if they need support, as well as asking them to report anyone they know who ‘may be becoming an extremist’.
The warnings began popping up Thursday and give little indication of what the platform considers to be ‘extremist content’:
Encouraging people to turn in their friends, relatives, and neighbors for wrong-think: where have we heard about this before?
On closer inspection, the warnings further link to a group that calls itself Life After Hate.
The About description on their website reads:
Our Mission – Life After Hate is committed to helping people leave the violent far-right to connect with humanity and lead compassionate lives.
Our Programs – Our primary goal is to interrupt violence committed in the name of ideological or religious beliefs. We do this through education, interventions, academic research, and outreach.
Note that there is still no definition of what ‘violent far-right’ means, and there is no mention of any far-left hate groups, of which there are plenty.
A ‘fact sheet’ posted to Life After Hate’s website also proclaims that “far right extremism and white supremacy are the greatest domestic terror threats facing the United States,” which is BULLSHIT.
So, OK, hate only exists on the right. And surely every woke leftist this message is sent to won’t use it against the conservative neighbour or colleague who merely disagrees with them.
You can see where this is heading.
Facebook responded to questions about the alerts Thursday, issuing a statement that says, “This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk.”
“We are partnering with NGOs and academic experts in this space and hope to have more to share in the future,” the Facebook statement also read.
This group appears to be yet another like the Southern Poverty Law Center, a far left political entity masquerading as a bipartisan organisation with the express goal of silencing anyone who does not adhere to their warped outlook.
As Pink Floyd legend Roger Waters recently noted, Facebook and Zuckerberg have an insatiable desire to “insidiously take over absolutely everything” and wipe out anyone or anything they cannot control.
What I don’t understand is WHY people are still on Facebook, Twitter, etc.
The only way these Big Tech companies will learn is if people get out. “Let’s Cancel Social Media.”
The censorship of information is at an all time high, but do people really recognize the extent to which it has been and is being carried out? A recent article published in the British Medical Journal by journalist Laurie Clarke has highlighted the fact that Facebook has already removed at least 16 million pieces of content from its platform and added warnings to approximately 167 million others.
YouTube has removed nearly 1 million videos related to, according to them, “dangerous or misleading covid-19 medical information.”
Being an independent media outlet, Collective Evolution has experienced this censorship first hand. We’ve also been in touch with, and witnessed, many doctors and world-renowned scientists being subjected to the same type of treatment from these social media organizations.
The same happened to Dr. Carl Heneghan, a professor of evidence-based medicine at Oxford and an emergency GP, who wrote an article regarding the efficacy of face masks in stopping the spread of COVID.
His article was not removed, but a label was added to it by Facebook saying it was ‘fake information.’ There are many more examples.
Clarke’s article says, with regards to posts that have been removed and labelled, that,
“while a portion of that content is likely to be wilfully wrongheaded or vindictively misleading, the pandemic is littered with examples of scientific opinion that have been caught in the dragnet.”
This is true, take for example the ‘lab origins of COVID debate.’ Early on in the pandemic you were not even allowed to mention that COVID may have originated in a lab, and if you did, you were punished for doing so.
Independent media platforms were demonetized and subjected to changes in algorithms. Now, all of a sudden, the mainstream media is discussing it as a legitimate possibility.
It makes no sense.
“This underscores the difficulty of defining scientific truth, prompting the bigger question of whether social media platforms such as Facebook, Twitter, Instagram and YouTube should be tasked with this at all…”
“I think it’s quite dangerous for scientific content to be labelled as misinformation, just because of the way people might perceive that,” says Sander van der Linden, professor of social psychology in society at Cambridge University, UK.
“Even though it might fit under a definition (of misinformation) in a very technical sense, I’m not sure if that’s the right way to describe it more generally because it could lead to greater politicisation of science, which is undesirable.” – Clarke
This type of “politicization of science” is exactly what’s happened during this pandemic.
Science is being suppressed for political and financial gain. Covid-19 has unleashed state corruption on a grand scale, and it is harmful to public health. Politicians and industry are responsible for this opportunistic embezzlement. So too are scientists and health experts. The pandemic has revealed how the medical-political complex can be manipulated in an emergency — a time when it is even more important to safeguard science. – Kamran Abbasi is a doctor, executive editor of the British Medical Journal, and the editor of the Bulletin of the World Health Organization. (source)
NSA whistleblower Edward Snowden offered his thoughts on the censorship we’ve been seeing during this pandemic in November of last year, stating the following:
In secret, these companies had all agreed to work with the U.S. Government far beyond what the law required of them, and that’s what we’re seeing with this new censorship push is really a new direction in the same dynamic.
These companies are not obligated by the law to do almost any of what they’re actually doing but they’re going above and beyond, to, in many cases, to increase the depth of their relationship (with the government) and the government’s willingness to avoid trying to regulate them in the context of their desired activities, which is ultimately to dominate the conversation and information space of global society in different ways… They’re trying to make you change your behaviour.
If you’re not comfortable letting the government determine the boundaries of appropriate political speech, why are you begging Mark Zuckerberg to do it?
I think the reality here is…it’s not really about freedom of speech, and it’s not really about protecting people from harm…I think what you see is the internet has become the de facto means of mass communication.
That represents influence which represents power, and what we see is we see a whole number of different tribes basically squabbling to try to gain control over this instrument of power.
What we see is an increasing tendency to silence journalists who say things that are in the minority.
It makes you wonder, is this “fact-checking” actually about fact checking? Or is something else going on here?
Below is a breakdown from Clarke’s article illustrating how fact checking works and what the problem is with following the science.
Since we have reported this many times over the last 5 years, we decided to let our readers hear it from someone else for a change as it’s truly quite vindicating to see more investigators coming to these conclusions.
How Fact Checking Works
The past decade has seen an arms race between users who peddle disinformation (intentionally designed to mislead) or unwittingly share misinformation (which users don’t realise is false) and the social media platforms that find themselves charged with policing it, whether they want to or not.[1]
When The BMJ questioned Facebook, Twitter, and YouTube (which is owned by Google) they all highlighted their efforts to remove potentially harmful content and to direct users towards authoritative sources of information on covid-19 and vaccines, including the World Health Organization and the US Centers for Disease Control and Prevention.
Although their moderation policies differ slightly, the platforms generally remove or reduce the circulation of content that disputes information given by health authorities such as WHO and the CDC or spreads false health claims that are considered harmful, including incorrect information about the dangers of vaccines.
But the pandemic has seen a shifting patchwork of criteria employed by these companies to define the boundaries of misinformation.
This has led to some striking U-turns: at the beginning of the pandemic, posts saying that masks helped to prevent the spread of covid-19 were labelled “false”; now it’s the opposite, reflecting the changing nature of the academic debate and official recommendations.
Twitter manages its fact checking internally. But Facebook and YouTube rely on partnerships with third party fact checkers, convened under the umbrella of the International Fact-Checking Network — a non-partisan body that certifies other fact checkers, run by the Poynter Institute for Media Studies, a non-profit journalism school in St Petersburg, Florida.
Poynter’s top donors include the Charles Koch Institute (a public policy research organisation), the National Endowment for Democracy (a US government agency), and the Omidyar Network (a “philanthropic investment firm”), as well as Google and Facebook.
Poynter also owns the Tampa Bay Times newspaper and the high profile fact checker PolitiFact. The Poynter Institute declined The BMJ’s invitation to comment for this article.
For scientific and medical content the International Fact-Checking Network involves little known outfits such as SciCheck, Metafact, and Science Feedback.
Health Feedback, a subsidiary of Science Feedback, handpicks scientists to deliver its verdict.
Using this method, it labelled as “misleading” a Wall Street Journal opinion article[2] predicting that the US would have herd immunity by April 2021, written by Marty Makary, professor of health policy and management at Johns Hopkins University in Baltimore, Maryland.
This prompted the newspaper to issue a rebuttal headlined “Fact checking Facebook’s fact checkers,” arguing that the rating was “counter-opinion masquerading as fact checking.”[3]
Makary hadn’t presented his argument as a factual claim, the article said, but had made a projection based on his analysis of the evidence.
A spokesperson for Science Feedback tells The BMJ that, to verify claims, it selects scientists on the basis of “their expertise in the field of the claim/article.”
They explain, “Science Feedback editors usually start by searching the relevant academic literature and identifying scientists who have authored articles on related topics or have the necessary expertise to assess the content.”
The organisation then either asks the selected scientists to weigh in directly or collects claims that they’ve made in the media or on social media to reach a verdict.
In the case of Makary’s article it identified 20 relevant scientists and received feedback from three.
“Follow The Science”
The contentious nature of these decisions is partly down to how social media platforms define the slippery concepts of misinformation versus disinformation.
This decision relies on the idea of a scientific consensus. But some scientists say that this smothers heterogeneous opinions, problematically reinforcing a misconception that science is a monolith.
This is encapsulated by what’s become a pandemic slogan:
“Follow the science.” David Spiegelhalter, chair of the Winton Centre for Risk and Evidence Communication at Cambridge University, calls this “absolutely awful,” saying that behind closed doors scientists spend the whole time arguing and deeply disagreeing on some fairly fundamental things.
“Science is not out in front telling you what to do; it shouldn’t be. I view it much more as walking along beside you muttering to itself, making comments about what it’s seeing and making some tentative suggestions about what might happen if you take a particular path, but it’s not in charge.”
The term “misinformation” could itself contribute to a flattening of the scientific debate. Martin Kulldorff, professor of medicine at Harvard Medical School in Boston, Massachusetts, has been criticised for his views on lockdown, which tack closely to his native Sweden’s more relaxed strategy.[4]
He says that scientists who voice unorthodox opinions during the pandemic are worried about facing “various forms of slander or censoring … they say certain things but not other things, because they feel that will be censored by Twitter or YouTube or Facebook.”
This worry is compounded by the fear that it may affect grant funding and the ability to publish scientific papers, he tells The BMJ.
The binary idea that scientific assertions are either correct or incorrect has fed into the divisiveness that has characterised the pandemic. Samantha Vanderslott, a health sociologist at the University of Oxford, UK, told Nature, “Calling out fake stories can raise your profile.”
In the same article Giovanni Zagni, director of the Italian fact checking website Facta, noted that “you can build a career” on the basis of becoming “a well respected voice that fights against bad information.”[5]
But this has fed a perverse incentive for scientists to label each other’s positions misinformation or disinformation.[6] Van der Linden likens this to how the term “fake news” was weaponised by Donald Trump to silence his critics.
He says, “I think you see a bit of the same with the term ‘misinformation,’ when there’s science that you don’t agree with and you label it as misinformation.”
Health Feedback’s website says that it won’t select scientists to verify claims if they’ve undermined their credibility by “propagating misinformation, whether intentionally or not.”
In practice, this could create a Kafkaesque situation where scientists are precluded from offering their opinion as part of the fact checking process if they expressed an opinion that Facebook labelled misinformation.
Strengthening the echo chamber effect is the fact that Health Feedback sometimes verifies claims by looking at what scientists have said on Twitter or in the media.
Van der Linden says that it’s important for people to understand that in the scientific domain “there’s uncertainty, there’s debate, and it’s about the accumulation of insights over time and revising our opinions as we go along.”
Healthy debate helps to separate the wheat from the chaff. Jevin West, associate professor in the Information School at the University of Washington in Seattle, says that social media platforms should therefore be “extra careful when it comes to debates involving science.”
“The institution of science has developed these norms and behaviour to be self-corrective. So, for [social media platforms] to step into that conversation, I think it’s problematic.”
Experts who spoke to The BMJ emphasised the near impossibility of distinguishing between a minority scientific opinion and an opinion that’s objectively incorrect (misinformation).
Spiegelhalter says that this would constitute a difficult “legalistic judgment about what a reasonable scientific opinion would be … I’ve got my own criteria that I use to decide whether I think something is misleading, but I find it very difficult to codify.”
Other scientists worry that, if this approach to scientific misinformation outlives the pandemic, the scientific debate could become worryingly subject to commercial imperatives.
Vinay Prasad, associate professor at the University of California San Francisco, argued on the MedPage Today website:
“The risk is that the myriad players in biomedicine, from large to small biopharmaceutical and [medical] device firms, will take their concerns to social media and journal companies. On a topic like cancer drugs, a tiny handful of folks critical of a new drug approval may be outnumbered 10:1 by key opinion leaders who work with the company.”[7]
Thus the majority who speak loudest, most visibly, and with the largest number online, may be judged “correct” by the public—and, as the saying goes, history is written by the victors.
Social media companies are still experimenting with the new raft of measures introduced since last year and may adapt their approach.
Van der Linden says that the talks he’s had with Facebook have focused on how the platform could help foster an appreciation of how science works, “to actually direct people to content that educates them about the scientific process, rather than labelling something as true or false.”
This debate is playing out against a wider ideological struggle, where the ideal of “truth” is increasingly placed above “healthy debate.”
“To remove things in general, I think is a bad idea. Because even if something is wrong, if you remove it there’s no opportunity to discuss it.” For instance, although he favors vaccination in general, people with fears or doubts about the vaccines used should not be silenced in online spaces, he says.
“If we don’t have an open debate within science, then that will have enormous consequences for science and society.”
There are concerns that this approach could ultimately undermine trust in public health. In the US, says West, trust in the government and media is falling.
He explains, “Science is still one of the more trusted institutions, but if you start tagging and shutting down conversation within science, to me that’s even worse than the actual posting of these individual articles.”
Facebook’s growing role in the ever-expanding surveillance and “pre-crime” apparatus of the national security state demands new scrutiny of the company’s origins and its products as they relate to a former, controversial DARPA-run surveillance program that was essentially analogous to what is currently the world’s largest social network.
In mid-February, Daniel Baker, a US veteran described by the media as “anti-Trump, anti-government, anti-white supremacists, and anti-police,” was charged by a Florida grand jury with two counts of “transmitting a communication in interstate commerce containing a threat to kidnap or injure.”
The communication in question had been posted by Baker on Facebook, where he had created an event page to organize an armed counter-rally to one planned by Donald Trump supporters at the Florida capital of Tallahassee on January 6. “If you are afraid to die fighting the enemy, then stay in bed and live. Call all of your friends and Rise Up!,” Baker had written on his Facebook event page.
Baker’s case is notable as it is one of the first “precrime” arrests based entirely on social media posts—the logical conclusion of the Trump administration’s, and now Biden administration’s, push to normalize arresting individuals for online posts to prevent violent acts before they can happen. From the increasing sophistication of US intelligence/military contractor Palantir’s predictive policing programs to the formal announcement of the Justice Department’s Disruption and Early Engagement Program in 2019 to Biden’s first budget, which contains $111 million for pursuing and managing “increasing domestic terrorism caseloads,” the steady advance toward a precrime-centered “war on domestic terror” has been notable under every post-9/11 presidential administration.
This new so-called war on domestic terror is increasingly being waged on the basis of posts like these on Facebook. And, while Facebook has long sought to portray itself as a “town square” that allows people from across the world to connect, a deeper look into its apparently military origins and continual military connections reveals that the world’s largest social network was always intended to act as a surveillance tool to identify and target domestic dissent.
Part 1 of this two-part series on Facebook and the US national-security state explores the social media network’s origins and the timing and nature of its rise as it relates to a controversial military program that was shut down the same day that Facebook launched. The program, known as LifeLog, was one of several controversial post-9/11 surveillance programs pursued by the Pentagon’s Defense Advanced Research Projects Agency (DARPA) that threatened to destroy privacy and civil liberties in the United States while also seeking to harvest data for producing “humanized” artificial intelligence (AI).
As this report will show, Facebook is not the only Silicon Valley giant whose origins coincide closely with this same series of DARPA initiatives and whose current activities are providing both the engine and the fuel for a hi-tech war on domestic dissent.
DARPA’s Data Mining for “National Security” and to “Humanize” AI
In the aftermath of the September 11 attacks, DARPA, in close collaboration with the US intelligence community (specifically the CIA), began developing a “precrime” approach to combatting terrorism known as Total Information Awareness or TIA. The purpose of TIA was to develop an “all-seeing” military-surveillance apparatus. The official logic behind TIA was that invasive surveillance of the entire US population was necessary to prevent terrorist attacks, bio-terrorism events, and even naturally occurring disease outbreaks.
The architect of TIA, and the man who led it during its relatively brief existence, was John Poindexter, best known for being Ronald Reagan’s National Security Advisor during the Iran-Contra affair and for being convicted of five felonies in relation to that scandal. A less well-known activity of Iran-Contra figures like Poindexter and Oliver North was their development of the Main Core database to be used in “continuity of government” protocols. Main Core was used to compile a list of US dissidents and “potential troublemakers” to be dealt with if the COG protocols were ever invoked. These protocols could be invoked for a variety of reasons, including widespread public opposition to a US military intervention abroad, widespread internal dissent, or a vaguely defined moment of “national crisis” or “time of panic.” Americans were not informed if their name was placed on the list, and a person could be added to the list for merely having attended a protest in the past, for failing to pay taxes, or for other, “often trivial,” behaviors deemed “unfriendly” by its architects in the Reagan administration.
In light of this, it was no exaggeration when New York Times columnist William Safire remarked that, with TIA, “Poindexter is now realizing his twenty-year dream: getting the ‘data-mining’ power to snoop on every public and private act of every American.”
The TIA program met with considerable citizen outrage after it was revealed to the public in early 2003. TIA’s critics included the American Civil Liberties Union, which claimed that the surveillance effort would “kill privacy in America” because “every aspect of our lives would be catalogued,” while several mainstream media outlets warned that TIA was “fighting terror by terrifying US citizens.” As a result of the pressure, DARPA changed the program’s name to Terrorist Information Awareness to make it sound less like a national-security panopticon and more like a program aiming specifically at terrorists in the post-9/11 era.
The TIA projects were not actually closed down, however, with most moved to the classified portfolios of the Pentagon and US intelligence community. Some became intelligence funded and guided private-sector endeavors, such as Peter Thiel’s Palantir, while others resurfaced years later under the guise of combatting the COVID-19 crisis.
Soon after TIA was initiated, a similar DARPA program was taking shape under the direction of a close friend of Poindexter’s, DARPA program manager Douglas Gage. Gage’s project, LifeLog, sought to “build a database tracking a person’s entire existence” that included an individual’s relationships and communications (phone calls, mail, etc.), their media-consumption habits, their purchases, and much more in order to build a digital record of “everything an individual says, sees, or does.” LifeLog would then take this unstructured data and organize it into “discreet episodes” or snapshots while also “mapping out relationships, memories, events and experiences.”
LifeLog, per Gage and supporters of the program, would create a permanent and searchable electronic diary of a person’s entire life, which DARPA argued could be used to create next-generation “digital assistants” and offer users a “near-perfect digital memory.” Gage insisted, even after the program was shut down, that individuals would have had “complete control of their own data-collection efforts” as they could “decide when to turn the sensors on or off and decide who will share the data.” In the years since then, analogous promises of user control have been made by the tech giants of Silicon Valley, only to be broken repeatedly for profit and to feed the government’s domestic-surveillance apparatus.
The information that LifeLog gleaned from an individual’s every interaction with technology would be combined with information obtained from a GPS transmitter that tracked and documented the person’s location, audio-visual sensors that recorded what the person saw and said, as well as biomedical monitors that gauged the person’s health. Like TIA, LifeLog was promoted by DARPA as potentially supporting “medical research and the early detection of an emerging epidemic.”
Critics in mainstream media outlets and elsewhere were quick to point out that the program would inevitably be used to build profiles on dissidents as well as suspected terrorists. Combined with TIA’s surveillance of individuals at multiple levels, LifeLog went farther by “adding physical information (like how we feel) and media data (like what we read) to this transactional data.” One critic, Lee Tien of the Electronic Frontier Foundation, warned at the time that the programs that DARPA was pursuing, including LifeLog, “have obvious, easy paths to Homeland Security deployments.”
At the time, DARPA publicly insisted that LifeLog and TIA were not connected, despite their obvious parallels, and that LifeLog would not be used for “clandestine surveillance.” However, DARPA’s own documentation on LifeLog noted that the project “will be able . . . to infer the user’s routines, habits and relationships with other people, organizations, places and objects, and to exploit these patterns to ease its task,” which acknowledged its potential use as a tool of mass surveillance.
In addition to the ability to profile potential enemies of the state, LifeLog had another goal that was arguably more important to the national-security state and its academic partners—the “humanization” and advancement of artificial intelligence. In late 2002, just months prior to announcing the existence of LifeLog, DARPA released a strategy document detailing development of artificial intelligence by feeding it with massive floods of data from various sources.
The post-9/11 military-surveillance projects—LifeLog and TIA being only two of them—offered quantities of data that had previously been unthinkable to obtain and that could potentially hold the key to achieving the hypothesized “technological singularity.” The 2002 DARPA document even discusses DARPA’s effort to create a brain-machine interface that would feed human thoughts directly into machines to advance AI by keeping it constantly awash in freshly mined data.
One of the projects outlined by DARPA, the Cognitive Computing Initiative, sought to develop sophisticated artificial intelligence through the creation of an “enduring personalized cognitive assistant,” later termed the Perceptive Assistant that Learns, or PAL. PAL, from the very beginning was tied to LifeLog, which was originally intended to result in granting an AI “assistant” human-like decision-making and comprehension abilities by spinning masses of unstructured data into narrative format.
The would-be main researchers for the LifeLog project also reflect the program’s end goal of creating humanized AI. For instance, Howard Shrobe at the MIT Artificial Intelligence Laboratory and his team at the time were set to be intimately involved in LifeLog. Shrobe had previously worked for DARPA on the “evolutionary design of complex software” before becoming associate director of the AI Lab at MIT and has devoted his lengthy career to building “cognitive-style AI.” In the years after LifeLog was cancelled, he again worked for DARPA as well as on intelligence community–related AI research projects. In addition, the AI Lab at MIT was intimately connected with the 1980s corporation and DARPA contractor called Thinking Machines, which was founded by and/or employed many of the lab’s luminaries—including Danny Hillis, Marvin Minsky, and Eric Lander—and sought to build AI supercomputers capable of human-like thought. All three of these individuals were later revealed to be close associates of and/or sponsored by the intelligence-linked pedophile Jeffrey Epstein, who also generously donated to MIT as an institution and was a leading funder of and advocate for transhumanist-related scientific research.
Soon after the LifeLog program was shuttered, critics worried that, like TIA, it would continue under a different name. For example, Lee Tien of the Electronic Frontier Foundation told VICE at the time of LifeLog’s cancellation, “It would not surprise me to learn that the government continued to fund research that pushed this area forward without calling it LifeLog.”
Along with its critics, one of the would-be researchers working on LifeLog, MIT’s David Karger, was also certain that the DARPA project would continue in a repackaged form. He told Wired that “I am sure such research will continue to be funded under some other title . . . I can’t imagine DARPA ‘dropping out’ of such a key research area.”
The answer to these speculations appears to lie with the company that launched the exact same day that LifeLog was shuttered by the Pentagon: Facebook.
Thiel Information Awareness
After considerable controversy and criticism, in late 2003, TIA was shut down and defunded by Congress, just months after it was launched. It was only later revealed that TIA was never actually shut down, with its various programs having been covertly divided up among the web of military and intelligence agencies that make up the US national-security state. Some of it was privatized.
The same month that TIA was pressured to change its name after growing backlash, Peter Thiel incorporated Palantir, which was, incidentally, developing the core panopticon software that TIA had hoped to wield. Soon after Palantir’s incorporation in 2003, Richard Perle, a notorious neoconservative from the Reagan and Bush administrations and an architect of the Iraq War, called TIA’s Poindexter and said he wanted to introduce him to Thiel and his associate Alex Karp, now Palantir’s CEO. According to a report in New York magazine, Poindexter “was precisely the person” whom Thiel and Karp wanted to meet, mainly because “their new company was similar in ambition to what Poindexter had tried to create at the Pentagon,” that is, TIA. During that meeting, Thiel and Karp sought “to pick the brain of the man now widely viewed as the godfather of modern surveillance.”
Soon after Palantir’s incorporation, though the exact timing and details of the investment remain hidden from the public, the CIA’s In-Q-Tel became the company’s first backer, aside from Thiel himself, giving it an estimated $2 million. In-Q-Tel’s stake in Palantir would not be publicly reported until mid-2006.
The money was certainly useful, but as Alex Karp told the New York Times in October 2020, “the real value of the In-Q-Tel investment was that it gave Palantir access to the CIA analysts who were its intended clients.” A key figure in the making of In-Q-Tel investments during this period, including the investment in Palantir, was the CIA’s chief information officer, Alan Wade, who had been the intelligence community’s point man for Total Information Awareness. Wade had previously cofounded the post-9/11 Homeland Security software contractor Chiliad alongside Christine Maxwell, sister of Ghislaine Maxwell and daughter of Iran-Contra figure, intelligence operative, and media baron Robert Maxwell.
After the In-Q-Tel investment, the CIA would be Palantir’s only client until 2008. During that period, Palantir’s two top engineers—Aki Jain and Stephen Cohen—traveled to CIA headquarters at Langley, Virginia, every two weeks. Jain recalls making at least two hundred trips to CIA headquarters between 2005 and 2009. During those regular visits, CIA analysts “would test [Palantir’s software] out and offer feedback, and then Cohen and Jain would fly back to California to tweak it.” As with In-Q-Tel’s decision to invest in Palantir, one of TIA’s architects was again central: Alan Wade, still the CIA’s chief information officer during this period, played a key role in many of these meetings and subsequently in the “tweaking” of Palantir’s products.
Today, Palantir’s products are used for mass surveillance, predictive policing, and other disconcerting policies of the US national-security state. A telling example is Palantir’s sizable involvement in the new Health and Human Services–run wastewater surveillance program that is quietly spreading across the United States. As noted in a previous Unlimited Hangout report, that system is the resurrection of a TIA program called Biosurveillance. It is feeding all its data into the Palantir-managed and secretive HHS Protect data platform. The decision to turn controversial DARPA-led programs into private ventures, however, was not limited to Thiel’s Palantir.
The Rise of Facebook
The shuttering of TIA at DARPA had an impact on several related programs, which were also dismantled in the wake of public outrage over DARPA’s post-9/11 programs. One of these programs was LifeLog. As news of the program spread through the media, many of the same vocal critics who had attacked TIA went after LifeLog with similar zeal, with Steven Aftergood of the Federation of American Scientists telling Wired at the time that “LifeLog has the potential to become something like ‘TIA cubed.’” LifeLog being viewed as something that would prove even worse than the recently cancelled TIA had a clear effect on DARPA, which had just seen both TIA and another related program cancelled after considerable backlash from the public and the press.
The firestorm of criticism of LifeLog took its program manager, Doug Gage, by surprise, and Gage has continued to assert that the program’s critics “completely mischaracterized” the goals and ambitions of the project. Despite Gage’s protests and those of LifeLog’s would-be researchers and other supporters, the project was publicly nixed on February 4, 2004. DARPA never provided an explanation for its quiet move to shutter LifeLog, with a spokesperson stating only that it was related to “a change in priorities” for the agency. On DARPA director Tony Tether’s decision to kill LifeLog, Gage later told VICE, “I think he had been burnt so badly with TIA that he didn’t want to deal with any further controversy with LifeLog. The death of LifeLog was collateral damage tied to the death of TIA.”
Fortuitously for those supporting the goals and ambitions of LifeLog, a company that turned out to be its private-sector analogue was born on the same day that LifeLog’s cancellation was announced. On February 4, 2004, what is now the world’s largest social network, Facebook, launched its website and quickly rose to the top of the social media roost, leaving other social media companies of the era in the dust.
A few months into Facebook’s launch, in June 2004, Facebook cofounders Mark Zuckerberg and Dustin Moskovitz brought Sean Parker onto Facebook’s executive team. Parker, previously known for cofounding Napster, later connected Facebook with its first outside investor, Peter Thiel. As discussed, Thiel, at that time, in coordination with the CIA, was actively trying to resurrect controversial DARPA programs that had been dismantled the previous year. Notably, Sean Parker, who became Facebook’s first president, also had a history with the CIA, which recruited him at the age of sixteen soon after he had been busted by the FBI for hacking corporate and military databases. Thanks to Parker, in September 2004, Thiel formally acquired $500,000 worth of Facebook shares and was added to its board. Parker maintained close ties to Facebook as well as to Thiel, with Parker being hired as a managing partner of Thiel’s Founders Fund in 2006.
Thiel and Facebook cofounder Moskovitz remained involved with each other outside of the social network long after Facebook’s rise to prominence, with Thiel’s Founders Fund becoming a significant investor in Moskovitz’s company Asana in 2012. Thiel’s longstanding symbiotic relationship with Facebook cofounders extends to his company Palantir, as the data that Facebook users make public invariably winds up in Palantir’s databases and helps drive the surveillance engine Palantir runs for a handful of US police departments, the military, and the intelligence community. In the case of the Facebook–Cambridge Analytica data scandal, Palantir was also involved in utilizing Facebook data to benefit the 2016 Donald Trump presidential campaign.
Today, as recent arrests such as that of Daniel Baker have indicated, Facebook data is slated to help power the coming “war on domestic terror,” given that information shared on the platform is being used in “precrime” capture of US citizens, domestically. In light of this, it is worth dwelling on the point that Thiel’s exertions to resurrect the main aspects of TIA as his own private company coincided with his becoming the first outside investor in what was essentially the analogue of another DARPA program deeply intertwined with TIA.
Facebook, a Front
Because of the coincidence that Facebook launched the same day that LifeLog was shut down, there has been recent speculation that Zuckerberg began and launched the project with Moskovitz, Saverin, and others through some sort of behind-the-scenes coordination with DARPA or another organ of the national-security state. While there is no direct evidence for this precise claim, the early involvement of Parker and Thiel in the project, particularly given the timing of Thiel’s other activities, reveals that the national-security state was involved in Facebook’s rise. It is debatable whether Facebook was intended from its inception to be a LifeLog analogue or if it happened to be the social media project that fit the bill after its launch. The latter seems more likely, especially considering that Thiel also invested in another early social media platform, Friendster.
An important point linking Facebook and LifeLog is the subsequent identification of Facebook with LifeLog by the latter’s DARPA architect himself. In 2015, Gage told VICE that “Facebook is the real face of pseudo-LifeLog at this point.” He tellingly added, “We have ended up providing the same kind of detailed personal information to advertisers and data brokers and without arousing the kind of opposition that LifeLog provoked.”
Users of Facebook and other large social media platforms have so far been content to allow these platforms to sell their private data so long as they publicly operate as private enterprises. Backlash only really emerged when such activities were publicly tied to the US government, and especially the US military, even though Facebook and other tech giants routinely share their users’ data with the national-security state. In practice, there is little difference between the public and private entities.
Edward Snowden, the NSA whistleblower, notably warned in 2019 that Facebook is just as untrustworthy as US intelligence, stating that “Facebook’s internal purpose, whether they state it publicly or not, is to compile perfect records of private lives to the maximum extent of their capability, and then exploit that for their own corporate enrichment. And damn the consequences.”
Snowden also stated in the same interview that “the more Google knows about you, the more Facebook knows about you, the more they are able . . . to create permanent records of private lives, the more influence and power they have over us.” This underscores how both Facebook and intelligence-linked Google have accomplished much of what LifeLog had aimed to do, but on a much larger scale than what DARPA had originally envisioned.
The reality is that most of the large Silicon Valley companies of today have been closely linked to the US national-security state establishment since their inception. Notable examples aside from Facebook and Palantir include Google and Oracle. Today these companies are more openly collaborating with the military-intelligence agencies that guided their development and/or provided early funding, as they are used to provide the data needed to fuel the newly announced war on domestic terror and its accompanying algorithms.
It is hardly a coincidence that someone like Peter Thiel, who built Palantir with the CIA and helped ensure Facebook’s rise, is also heavily involved in Big Data, AI-driven “predictive policing” approaches to surveillance and law enforcement, both through Palantir and through his other investments. TIA, LifeLog, and related government and private programs and institutions launched after 9/11 were always intended to be used against the American public in a war against dissent. This was noted by their critics in 2003–4 and by those who have examined the origins of the “homeland security” pivot in the US and its connection to past CIA “counterterror” programs in Vietnam and Latin America.
Ultimately, the illusion of Facebook and related companies as being independent of the US national-security state has prevented a recognition of the reality of social media platforms and their long-intended, yet covert uses, which we are beginning to see move into the open following the events of January 6. Now, with billions of people conditioned to use Facebook and social media as part of their daily lives, the question becomes: If that illusion were to be irrevocably shattered today, would it make a difference to Facebook’s users? Or has the populace become so conditioned to surrendering their private data in exchange for dopamine-fueled social-validation loops that it no longer matters who ends up holding that data?
Eighty-one percent of respondents said it was important “to limit the power of big tech companies in America.”
Just 8% disagreed, while 11% were unsure.
Tech companies such as Apple, Facebook, Google and Twitter have come under increasing scrutiny for censoring third-party content — especially after Twitter permanently banned Donald Trump in the final days of his presidency.
Google and Apple recently suspended Parler, the alternative social media site to Twitter, from their app stores.
Facebook and Twitter claimed that Trump posed the risk of inciting violence among his supporters, while Google and Apple argued that Parler was not doing enough to police violent content on its own servers.
Big Tech has also come under scrutiny for collecting data from users and limiting — or “silo-ing” — the content they see.