This week, Meta announced that it will be changing its fact-checking policy. Instead of relying on third-party fact-checkers, the company will shift to community notes. Safeguards such as fact-checking were originally implemented to limit the spread of misinformation. This change will have an impact on Black communities worldwide and on Black media, and it will further degrade the reliability of the information that social media users engage with online.

Misinformation targeting the Black community has been prevalent for years. As recently as last year, during the election, studies identified multiple sources and drivers of misinformation aimed at Black communities. Black voters need accurate information about voting sites, ID requirements, deadlines, candidates, and policies. An unfiltered social media environment puts a great deal at risk when it comes to getting the right information to the communities that need it.

This isn’t the first time Meta has been on the wrong side of a misinformation issue. During the 2016 election, Cambridge Analytica and Facebook were at the center of various data-collection scandals. Although the Trump campaign denied using the information, reports surfaced that Cambridge Analytica had identified the Black community as one that was ripe for voter deterrence. Advertisers with access to that data could target the identified voters and feed them information most likely to sway them. For whoever used it, the truthfulness of that information mattered less than the vote it secured. Meta changing its policy opens the door for history to repeat itself when it comes to data and targeting.

At the time, on platforms such as Meta’s, a fact-checker would recognize misinformation and flag it or take it down. On X, which removed many of its misinformation guardrails, the environment during the election went largely unchecked by the platform itself, which relied instead on community notes.

The danger of being on social media with no sense of what’s fact or fiction is that the spread and adoption of false information can harm communities that need accurate information. Beyond that, stereotypes and harmful narratives about the Black community stand to be spread and amplified. The algorithms reward engagement, so when certain posts draw interaction, the companies amplify them to more people. This feedback loop has already had negative effects on X, formerly Twitter, and will now be prevalent on Meta’s platforms as well.

Another potential issue is the impact this will have on Black businesses. Businesses depend on their good reputations. Without Meta’s fact-checking apparatus, people may be able to spread false or misleading information about Black businesses to drive customers away from them. That information could concern their products, their practices, or their teams. At a time when Black household business ownership has been rising, it is important that these businesses’ online footprints remain accurate and honest.

In light of all this, what can be done to counteract it, if anything? The reality is that we have to play the cards we’ve been dealt, so hoping that Meta reverses its policy is not productive. One likely result of this series of events is that we may need to turn to Black media more. We may become highly dependent on receiving the truth from our own. The responsibility of Black media to keep its audience well-informed and engaged is only heightened in times of trouble.

Black creatives and writers will have the task of ensuring that we’re collectively informed and inspired. It remains to be seen what will come of Meta now that certain safeguards have been removed, but we can only assume, and hope, that as a community our dependence on each other will increase as our trust in others decreases.