Record Number of Reports on Child Abuse and Nazi Content Online

In 2024, the Austrian reporting office Stopline received around 90,000 reports of suspected child sexual abuse material and National Socialist content online, nearly triple the 2023 figure (33,349 cases). However, almost 66,000 of these reports could not be verified because the websites in question were technically inaccessible, Stopline director Barbara Schloßbauer announced at a press conference on Thursday.
Of the approximately 24,000 reports that could be analyzed, 47 percent actually showed illegal content, a "significant increase compared to previous years," said Schloßbauer. The vast majority of the content classified as illegal involved depictions of the sexual abuse of minors (11,169 cases). In 93 cases, a connection to National Socialist re-engagement was identified, according to the annual report; in 2023, there had been only 22 confirmed cases.

Schloßbauer and Stefan Ebenberger, Secretary General of the Internet Service Providers Austria, attributed the sharp rise to a mix of causes: Stopline has become better known, and awareness among the population has grown. In addition, it is suspected that some reporters use technical tools to scan websites for illegal content. The increase in reports is fundamentally positive, as it also leads to more illegal content being deleted, "and that is the main goal," said Ebenberger.
Reporting Office Stopline: Outliers in February and December
There were outliers in the statistics in February and December 2024. In February, almost 2,500 reports came in, an above-average number; it was also unusual that much of the content was hosted in Austria, which is normally not among the most frequent countries of origin. The 2,480 reports of depictions of the sexual abuse of minors could be traced back to a single case in which a platform with thousands of illegal items was running through an Austrian hosting provider. The reports came from the British partner hotline Internet Watch Foundation, which counted every image and video as an individual report. According to Stopline, all of the content was deleted very quickly.
In December, there was another unusual situation: within a few hours, the reporting office received nearly 66,000 reports. They pointed to four file-sharing platforms and are assumed to have come from a single sender. The problem: the Stopline staff had no access to the platforms and could not verify the reports. Attempts to gain access, and requests to the providers and platform operators to check the content, were unsuccessful. "There are also cases where we have to recognize that we reach our limits and cannot proceed as a private reporting office," said Schloßbauer.
Origin Countries in Europe
Even though Austria tops the list of countries of origin for 2024 due to the statistical outlier, the Netherlands has in fact led for several years. Ukraine has also been among the main hosting countries for the past two to four years. However, Stopline suspects that these are virtual servers and that the infrastructure is not actually located in Ukraine. Although the partner hotlines in these countries are very committed, there is the problem of "bulletproof hosting": providers are paid well not to remove illegal content and simply accept administrative penalties. It is very difficult to tackle this business model, said Ebenberger.
Self-Created and AI-Generated Content
Over the past seven to eight years, there has also been an increase in content believed to be self-created by children and adolescents. While in the past almost all cases involved depictions of abuse, many young people today have no hesitation about posting photos of themselves on the internet in which they are clearly identifiable, said Schloßbauer. She estimates that such content accounts for about one-third of the reports. A great deal of education and awareness-raising is needed, also among parents, about the fact that these photos and videos can spread uncontrollably. Even if young people create the sexual depictions themselves, distributing them to larger groups or uploading them to platforms is illegal. AI-generated content is also becoming an issue, but for now it can usually still be distinguished from real photos and videos; where that is the case, the content is not criminally relevant.
(APA/Red)
This article has been automatically translated from the original German.