See Corresponding Action Alert for ACR 219
UPDATE: 8/31/2024 - ACR 219 died, it failed to be heard in the Assembly Privacy and Consumer Protection Committee before the 2024 session ended on 8/31/2024.
UPDATE: 8/6/2024 - The hearing for ACR 219 was cancelled at the request of the author on 8/6/2024. ACR 219 is in the Assembly Privacy and Consumer Protection Committee. Continue to contact the Assembly Privacy and Consumer Protection Committee Members, your Assemblymember, and Senator, and ask them to OPPOSE ACR 219. The resolution can be rescheduled for a vote anytime.
UPDATE: 7/15/2024 - ACR 219 is scheduled for an informational hearing in the Assembly Privacy and Consumer Protection Committee on Wednesday, 8/7/2024 at 1:00PM in Room 444 of the State Capitol. View the hearing agenda and details HERE. Position letters can be submitted HERE. View the reference sheet for the position letter portal HERE and the FAQs HERE. Continue to contact the Assembly Privacy and Consumer Protection Committee Members until a vote is taken. Sometimes a vote is not taken on the day of the hearing.
ACR 219 was introduced in the Assembly and referred to the Assembly Privacy and Consumer Protection Committee on 6/24/2024. This resolution is sponsored by Assemblyman Josh Lowenthal.
ACR 219 urges social media platforms to monitor and remove harmful and dangerous content as part of California Social Media Users' Bill of Rights. This resolution establishes the following as summarized:
1. Establishes that social media platforms have a civic duty to undertake vigorous efforts and must expend sufficient resources to monitor for and remove harmful and dangerous content as quickly as possible, regardless of potential absence of legal duty. This content includes anything that a person believes could cause substantial physical and emotional harm, especially to children.
2. Encourages platforms to provide accurate information on elections and democratic procedures while removing election disinformation and misinformation.
3. Encourages platforms to provide users with reasonable methods for reporting possible violations of platform content rules and to keep users informed of the status and outcome of their reports and appeals.
4. Encourages social media platform designs and policies to consider all languages, ages, and other contexts for their users.
5. Encourages platforms to have initial user privacy settings set at the maximum privacy protection level.
6. Encourages platforms to strictly protect the data of children and to provide easily accessible tools that let parents or guardians prevent a minor's access to inappropriate content and block targeted advertising to children.
7. Encourages platforms to allow users to easily obtain their personal data in a commonly used format so they can reuse, request changes to, or transmit that data.
8. Encourages platforms to provide easily located, concise, and user-friendly usage, privacy, and terms of service policies consistent with the approach taken in the European Union.
9. Expects that platforms will study and reduce negative effects in their algorithms and artificial intelligence (AI) tools that may cause harm to users. Platforms should work with vetted independent experts to identify and study potential harm, improve intervention effectiveness, and make data available to vetted researchers that will help the platform develop strategies to reduce harm and risks.
10. Encourages platforms to provide users with easily located, concise, and user-friendly explanations of the platform's algorithms, AI, and other tools for retaining users.
On the surface, it appears the resolution is trying to get social media companies to take voluntary proactive steps to protect the consumer, but its broad language leaves the door open for consumers’ rights to be abused. Much like a wolf in sheep’s clothing, ACR 219 signals that it is the legislature’s preference that social media companies engage in dangerous and unconstitutional censorship.
The resolution offers the following arguments, summarized here, to justify its request to police the internet:
While the First Amendment to the United States Constitution limits the ability of the government to restrict speech, it does not constrain the ability of private entities like social media companies to prevent their platforms from spreading "hate and disinformation." Today, these social media companies are largely unregulated and have acquired unprecedented influence over which ideas, information, and perspectives people are exposed to every day. These companies continue to use the immunity shield provided by Congress to absolve themselves of responsibility for minimizing the harmful and abusive content posted on their platforms.
Social media companies have designed their platforms to enable instant and widespread hate, harassment, bullying, and disinformation through their pursuit of user engagement and advertisement revenue. The level of social responsibility demonstrated by different platforms varies substantially. Some platforms strive to enforce content policies, while others are pulling back from earlier efforts to stop the spread and reach of harmful disinformation at a time when our democracy may be at serious risk. Californians and their leaders reject threats and harmful disinformation in the offline world, so harassment, bullying, disinformation, and content that endangers and addicts our citizens or threatens our free and fair elections should likewise have no place on online platforms.
NVIC OPPOSES ACR 219 because it gives a free pass to social media companies to censor anyone who voices opposition or concerns regarding vaccination or vaccine policies. During the COVID-19 pandemic, social media companies censored users over "harmful and dangerous content" with an inaccurate bias that favored vaccines despite reports of dangerous reactions and even deaths following vaccination.
People’s real vaccine reaction experiences and credible medical professionals' concerns that didn’t fit the CDC’s tightly controlled false narrative on vaccine safety and efficacy were removed from social media platforms. At the time, this was one of the only ways people could learn about the harm being caused by the vaccine. Blocking legitimate vaccine risks from being posted and shared prevented others from exercising true informed consent in the vaccine decision-making process. This led to more people being injured or killed by the COVID-19 vaccines.
This resolution contains some good suggestions, but they are overshadowed by legislators’ invitation for social media companies to censor a user's constitutional right to free speech.
While ACR 219 is not binding, it heavily encourages social media companies to repeat the same free speech violations against users as those that occurred during the COVID-19 pandemic.
For example, in March 2020, global social media platform Twitter implemented a COVID-19 "misinformation" policy that would add labels and warning messages on information about COVID-19 and COVID-19 vaccines that did not align with government policy. This policy led to the suspension of countless users, some of whom were placed into a "lifetime ban" category. These policies were reversed when Elon Musk purchased Twitter in 2022, and he disabled Twitter verification and content moderation features that were used to censor posts. Under Musk’s leadership, Twitter also withdrew from the European Union’s “Code of Practice on Disinformation.”
In 2023, Mark Zuckerberg, co-founder of Facebook and CEO of Meta, released a statement regarding the "misinformation" category Facebook used liberally when flagging and suspending users during the pandemic. Zuckerberg admitted that many from the "establishment" waffled on facts and asked Facebook to censor many statements made during the COVID-19 pandemic that were later determined to be true.
Here are a couple of verifiable facts regarding vaccines that are still censored on certain social media platforms. Vaccines, just like all pharmaceutical products, can cause injury and even death. As of July 1, 2024, the United States Government has paid out more than $5.22 billion to vaccine victims through the National Vaccine Injury Compensation Program (VICP). As of June 28, 2024, there were 48,101 deaths and 2,614,501 adverse events reported to the US Government's Vaccine Adverse Events Reporting System.
As the truth about COVID-19 censorship continues to be revealed, several lawsuits have been filed against government agencies and social media companies, and more will most likely follow. In May of 2022, former Missouri Attorney General Eric Schmitt and Louisiana Attorney General Jeff Landry brought the lawsuit Missouri v. Biden in an attempt to expose government officials and the coordinated efforts of Big Tech companies such as Twitter, Meta, YouTube, and Facebook to censor information related to COVID-19, COVID countermeasures, and election integrity. This lawsuit names sixty-seven (67) U.S. federal government agencies and officials, including senior officials in the White House, for allegedly engaging in a widespread campaign of pressuring and colluding with social media platforms to censor users in violation of the First Amendment of the U.S. Constitution.
https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240ACR219 - text, status, and history of ACR 219