TikTok will launch an in-app campaign next month to reach users in each of the European Union's 27 member states and direct them to "trusted information" as part of its preparations for combating disinformation risks around this year's European elections.
"Next month, we will launch a local language Election Centre in-app for each of the 27 individual EU member states to ensure people can easily separate fact from fiction. Working with local electoral commissions and civil society organisations, these Election Centres will be a place where our community can find trusted and authoritative information," TikTok wrote today.
"Videos related to the European elections will be labelled to direct people to the relevant Election Centre. As part of our broader election integrity work, we will also add reminders to hashtags to encourage people to follow our rules, verify facts, and report content they believe violates our Community Guidelines," it added in a blog post discussing its preparations for the 2024 European elections.
The blog post also discusses the company's handling of a related election risk: influence operations, meaning covert attempts to deceive and manipulate public opinion in a bid to skew elections, typically by setting up networks of fake accounts and using them to spread and amplify inauthentic content. Here, TikTok commits to launching "dedicated covert influence operations reports," which it claims will bring "further transparency, accountability, and sharing with the industry" around these operations.
TikTok said the new covert influence operations report will be launched “in the coming months” and will likely be hosted within its existing Transparency Center.
TikTok also announced it will launch nine more media literacy campaigns in the region (for a total of 27, after launching 18 last year), evidently aiming to plug the gaps so that campaigns run in every EU member state.
It also says it aims to expand its network of local fact-checking partners, noting it currently works with nine organizations covering 18 languages. (For reference: the EU has 24 "official" languages and a further 16 "recognized" languages, not counting the languages spoken by immigrant communities.)
Notably, however, the video-sharing giant has not announced any new measures targeting the election security risks posed by AI-generated deepfakes.
In recent months, the EU has sharpened its focus on generative AI and political deepfakes, pressing platforms to put safeguards in place against this type of disinformation.
In the blog post, Kevin Morgan, TikTok's head of safety and integrity in EMEA, warns that generative AI technology brings "new challenges around misinformation." The platform also states that it does not allow "manipulated content that could be misleading," including AI-generated content of public figures that "depicts them endorsing a political view." However, Morgan offers no detail on how successful TikTok currently is at detecting (and removing) political deepfakes uploaded by users who choose to defy the ban and post politically misleading AI-generated content.
Instead, he writes that TikTok requires creators to label realistic AI-generated content, and notes that a tool was recently released to help users manually apply such labels. But the post does not explain how TikTok will enforce this deepfake labelling rule, nor does it detail how the company is addressing deepfake risks more broadly, including in relation to election threats.
"As technology evolves, we will continue to strengthen our efforts, including by working with industry through content provenance partnerships," is the only hint TikTok offers here.
We asked the company a series of questions seeking more details about the steps it is taking in the run-up to the European elections, including where in the EU it is focusing its efforts and where gaps remain (in language coverage, fact-checking, and media literacy). We will update this post if we receive a response.
New EU requirements to combat disinformation
Elections to the next European Parliament are due in early June, and the EU is dialing up the pressure on social media platforms in particular to prepare. Since August last year, the bloc has had a new legal instrument to force action from around two dozen larger platforms, which are designated as subject to the strictest requirements of its rebooted online governance rulebook.
Until now, the bloc has relied on self-regulation, in the form of a code of practice on disinformation, to encourage industry action against disinformation. But the EU has long been dissatisfied that signatories to this voluntary initiative, which include TikTok and most other major social media firms (with the exception of X/Twitter, which left the code last year), are not doing enough to tackle rising information threats, including to regional elections.
The EU's disinformation code launched in 2018 as a limited set of voluntary standards, with a small number of signatories committing to broad responses to disinformation risks. It was then beefed up in 2022, with more (and more granular) commitments and measures added. The list of signatories has also grown, taking in a wider range of players whose technology tools and services may play a role in the disinformation ecosystem.
Although the strengthened code is still not legally binding, the European Commission, the EU's executive arm and the enforcer of the online rulebook for larger digital platforms, has said it will take compliance with the code into account when assessing compliance with relevant elements of the (legally binding) Digital Services Act (DSA), which requires larger platforms, including TikTok, to identify and take steps to mitigate systemic risks arising from use of their technology, such as election interference.
The Commission's regular reviews of code signatories' performance typically feature lengthy public remarks from commissioners warning that platforms need to step up their investment, particularly to deliver more consistent moderation and fact-checking in smaller EU member states and languages. The platforms' standard response to negative EU PR is to make fresh claims that they are taking action, or doing more. Then, six months or a year later, the same pantomime usually plays out again.
But now that the bloc has finally passed legislation forcing action in this area (the DSA, which began applying to larger platforms in August last year), this "must do better on disinformation" loop may change. The Commission is currently consulting on detailed election security guidance for the nearly two dozen companies designated as very large online platforms (VLOPs) or very large online search engines (VLOSEs) under the regulation, which carry a legal obligation to mitigate disinformation risks.
If in-scope platforms fail to address disinformation threats, they risk being found in breach of the DSA, where penalties for violations can reach up to 6% of global annual turnover. The EU will be hoping the regulation finally concentrates Big Tech minds on fixing a societally corrosive problem that ad-funded platforms, with their commercial incentives to maximize usage and engagement, have generally chosen to dance around for years.
The Commission itself is responsible for enforcing the DSA on VLOPs/VLOSEs, so it will ultimately be the one to decide whether TikTok (and the other in-scope platforms) have done enough to address disinformation risks.
Judging by today's announcement, TikTok is stepping up its approach to regional disinformation and election security risks by making it more comprehensive, which speaks to the Commission's usual complaints. Still, the continued lack of comprehensive fact-checking resources across the EU's official languages is notable. (Though here the company depends on finding partners to provide those resources.)
TikTok's incoming Election Centres, localized to the official languages of all 27 EU member states, could end up playing an important role in combating election interference risks, assuming they prove effective at nudging users to respond more critically to questionable political content they encounter in the app, such as by taking steps to verify it via links to trusted sources. Much will depend on how these interventions are presented and designed, however.
The expansion of the media literacy campaigns to cover all EU member states is also notable, again speaking to a frequent Commission complaint. However, it is unclear (we asked) whether all of these campaigns will run before the European elections in June.
Elsewhere, TikTok's activity looks more like treading water. For example, the platform's last disinformation code report, submitted to the Commission last fall, noted that it had expanded its synthetic media policy to cover AI-generated and AI-altered content, and said it wanted to further strengthen enforcement of that policy over the next six months. Yet today's announcement contains no fresh detail on its enforcement capabilities.
In the earlier report to the Commission, TikTok also said it wanted to explore "new products and initiatives to help strengthen detection and enforcement capabilities" around synthetic media, including in the area of user education. Again, it is not clear whether the company has made much progress here. The broader issue is that while platforms like TikTok make it very easy for users to spread AI-generated fakes, robust technologies and techniques for detecting deepfakes remain in short supply across the board.
This asymmetry may ultimately demand other types of policy interventions to effectively tackle AI-related risks.
As for TikTok's claimed focus on user education, it has not made clear whether the additional regional media literacy campaigns it will run through 2024 will aim to help users identify AI-generated risks. We asked for more details there, too.
The platform originally signed up to the EU's disinformation code back in June 2020. But it has faced growing distrust and scrutiny in the region as concerns over the security implications of its China-based parent company have mounted. Add in the DSA, which has applied to it since last summer, and a major election year for the EU, and TikTok and its peers look set for a lot more Commission attention on disinformation risks in the near term.
Elon Musk-owned X has the dubious honor of being the first company to be formally investigated under the DSA, with the Commission concerned it may have breached requirements around risk management and a raft of other obligations.