
The number of AI-generated deepfakes is growing — time to act


The Radio and Television Commission of Lithuania (RTCL) warns that the spread of artificial intelligence (AI)-generated deepfake videos is increasing and could soon pose a serious threat to information integrity and public trust. The Commission stresses that action must be taken now to address legal and technological gaps that hinder the identification and removal of fake content.

Mantas Martišius, Chairman of the RTCL, says that the prevalence of AI-generated deepfakes in Lithuania has not yet reached a critical level, though he warns the situation is likely to deteriorate over time. He emphasizes that such content can cause confusion in society and be used for propaganda or to spread panic. The biggest challenge, he notes, is that national law currently does not provide sufficient powers to identify and catch distributors of realistic-looking deepfakes in a timely manner.

“With the rise of social networks in Lithuania, we haven’t yet reached a point where every second post is a video fake, but such content is definitely becoming more frequent,” Martišius told ELTA.

Meanwhile, Andrius Katinas, Head of the RTCL’s Business Supervision Department, whose daily work involves searching for and identifying fake content, notes that deepfakes are not problematic in themselves as long as they do not infringe on the rights of individuals or authors.

“Responsibility arises when the creators of this content violate the law — image, copyright, or related rights — or spread disinformation, war propaganda, or incite hatred,” emphasized A. Katinas.

He noted that detecting and identifying AI-generated content is complicated by its sheer volume and repeated reuploads.

“Although the platforms themselves remove some of the prohibited content, it reappears on social platforms when new accounts, channels, or videos are created. Even after the posts are removed, they have already caused damage: secondary and tertiary posts are created based on the hostile ones, further developing negative narratives. The extent of the damage increases when such posts are shared by public figures with many followers,” said A. Katinas.


Tools Exist, But Authority Is Lacking

Katinas emphasized that many processes involved in searching for fake content can be automated, but the final decision on classification, harmfulness, or removal of such material is still made by a human being — and requires considerable resources.

M. Martišius adds that the situation is complicated by the attitude of large social media companies, which do not always disclose violators because they are not legally obliged to do so.

“We can reach out to social network administrators, but we have no legal means to compel them to identify the person behind the post. Sometimes we receive a simple response from social network representatives that the content was posted by, for example, Artur65746, but they do not investigate who is behind that username,” said Martišius.

“Even if we manage to remove fake content aimed at the Lithuanian audience, it is extremely difficult to identify the person responsible. Such content may be created automatically and in an organized manner outside Lithuania, and its creators or clients may be hiding under the umbrella of hostile state services,” added A. Katinas.


The Legal Framework Lacks the Necessary Powers

Martišius pointed out that Lithuanian national law does not precisely define the conditions under which the person creating or distributing fake content must be disclosed.

“According to the internal rules of social networks, we are usually unable to identify the person who disseminated the content. If a serious incident occurs, the prosecutor’s office or the police can certainly obtain the data, but the RTCL does not have such powers. We can only ask social networks to voluntarily disclose the creators,” said Martišius.

Katinas adds that the technical implementation of legislation should go hand in hand with more active use of open-source intelligence (OSINT) tools.

“The RTCL has been using OSINT tools for several years in investigations into prohibited information and copyright infringements and, in cooperation with foreign agencies, has noticed a shortage of expertise at the specialist level,” the expert notes.


Raising Public Awareness Remains Key

According to M. Martišius, it is crucial to include provisions in the national legal framework that clearly define issues related to counterfeit image content — such as authorship and liability.

“Once we have sorted things out internally, we will still need to cooperate with the professional community, EU countries, and international partners to make such synthetic, fake images easier to track and distributors easier to identify. This will certainly be a serious problem in the future, especially as technology advances so rapidly,” he said.

Alongside the institutional fight against fake content, A. Katinas emphasizes that it is particularly important to educate consumers.

“Social media users should critically filter the information they receive — rely on verified profiles, and use media pluralism to evaluate random content pushed by algorithms. If you come across harmful or fake content, we encourage you to contact the RTCL,” he advised.


Last updated: 28-10-2025