YouTube has said it will remove content containing misinformation or disinformation on approved vaccines, as that content poses a “serious risk of egregious harm”.
“Specifically, content that falsely alleges that approved vaccines are dangerous and cause chronic health effects, claims that vaccines do not reduce transmission or contraction of disease, or contains misinformation on the substances contained in vaccines will be removed,” the platform said in a blog post.
“This would include content that falsely says that approved vaccines cause autism, cancer or infertility, or that substances in vaccines can track those who receive them. Our policies not only cover specific routine immunizations like for measles or Hepatitis B, but also apply to general statements about vaccines.”
Exceptions to the rules do exist: Videos that discuss vaccine policies, new trials, historical successes, and personal testimonials will be allowed, provided they do not violate other rules and the channel is not deemed to promote vaccine hesitancy.
“YouTube may allow content that violates the misinformation policies … if that content includes additional context in the video, audio, title, or description. This is not a free pass to promote misinformation,” YouTube said.
“Additional context may include countervailing views from local health authorities or medical experts. We may also make exceptions if the purpose of the content is to condemn, dispute, or satirise misinformation that violates our policies.”
If a channel violates the policy three times in 90 days, YouTube said it will remove the channel.
Among the channels removed was that of the Children's Health Defense, an anti-vaccine non-profit chaired by Robert F. Kennedy Jr. Kennedy framed the channel's removal as a free speech issue.
Meanwhile, the BBC reported that Russia threatened to ban YouTube after a pair of German-language RT channels were banned for COVID-19 misinformation.
When announcing its expanded policy, YouTube said it had removed over 130,000 videos for violating its COVID-19 vaccine policies since last year. In August, the video platform said it had removed over 1 million COVID-19 misinformation videos in total.
Earlier this year, Twitter began automatically labelling tweets it regarded as having misleading information about COVID-19 and its vaccines, as well as introducing its own strike system that includes temporary account locks and can lead to permanent suspension.
While the system has led to the repeated suspension of misinformation peddlers such as US congresswoman Marjorie Taylor Greene, the automated system has struggled to distinguish sarcasm from users attempting humour on the topics of COVID-19 and 5G.
In April, the Australian Department of Health published a page attempting to dispel any link between vaccines and internet connectivity.
“COVID-19 vaccines do not — and cannot — connect you to the internet,” it stated.
“Some people believe that hydrogels are needed for electronic implants, which can connect to the internet. The Pfizer mRNA vaccine does not use hydrogels as a component.”