What’s Happening in the World?

While the field of data protection is developing at an accelerating pace in our country, developments worldwide remain on the radar of the Personal Data Protection Authority (“Authority”).

As past examples have repeatedly shown, the Authority keeps pace with the global agenda, particularly the European Union’s General Data Protection Regulation (“GDPR”), and strives to meet the demands of the fast-moving data privacy world.

As GRC Legal Law Firm, we closely follow the global agenda and present this selection of current developments for your information.

The news below covers November 2022.

ICO x Interserve Group

After the Information Commissioner’s Office (ICO) fined Interserve Group Ltd, a Berkshire-based construction company, £4,400,000 for failing to keep its employees’ personal information secure, the UK’s Information Commissioner John Edwards warned that companies are leaving themselves open to cyber attacks by ignoring critical measures such as software updates and staff training.

The ICO found that the Company failed to take the necessary security measures to prevent a cyber attack that gave hackers access to the personal data of up to 113,000 employees via a phishing email.

The data compromised in the breach included personal information such as contact details, national insurance numbers and bank account details, as well as special categories of data such as ethnicity, religion, disability information, sexual orientation and health information.

In his statements on the subject, Edwards said that the biggest cyber risk businesses face is not from hackers, but from complacency within the company.

He added that businesses can expect a similar penalty from the Information Commissioner’s Office if they fail to monitor suspicious activity in their systems, fail to act on warnings, run outdated software, or do not provide training to their staff.

Stating that it is unacceptable for businesses that handle people’s most sensitive data to leave their doors open to cyber attackers, Edwards explained that the Interserve data breach has the potential for real harm, as it leaves employees vulnerable to identity theft and fraud.

It was also reported that the ICO and the National Cyber Security Centre (“NCSC”) are already working together to offer advice and support to businesses, and will be meeting with regulators from around the world in the coming days to work towards a consistent international cyber guide for the protection of personal data wherever a company is located.

Edwards is also known to have attended the 44th Global Privacy Assembly (44th GPA) in Turkey in October, where more than 120 data protection and privacy authorities came together.

Although the company’s anti-virus software detected the malware and raised an alert, the suspicious activity was not investigated thoroughly, so the attacker’s existing access to the company’s systems went unnoticed. The attacker then neutralised the anti-virus system, compromised 283 systems and 16 accounts, and encrypted the personal data of 113,000 employees, rendering it unusable.

The manner in which the data breach occurred has once again demonstrated the importance of the administrative and technical measures required to be taken by companies today and the role of employees in implementing these measures.

The ICO investigation found that Interserve failed to follow up on warnings of suspicious activity, used outdated software systems and protocols, and had inadequate staff training and risk assessments, leaving Interserve vulnerable to cyber-attack.

In light of these findings, the ICO sent Interserve a “notice of intent”, the precursor to a potential fine, setting the amount at £4.4 million; after considering Interserve’s representations, the ICO imposed the fine without any reduction.

EU x End-to-End Encryption

The European Union’s (“EU”) new regulation aimed at combating online sexual abuse of children has been criticised.

As is well known, strong end-to-end encryption is an essential part of a secure and reliable internet, protecting users in their online interactions. The same method also enables children to communicate safely and to report online abuse and harassment confidentially.
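To make the principle concrete, the sketch below shows end-to-end encryption in miniature, using the open-source PyNaCl library (the parties and message are illustrative; real messengers add key verification, forward secrecy and much more):

```python
# Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
# Illustrative only: the parties and message are invented for this example.
from nacl.public import PrivateKey, Box

# Each party generates a key pair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at 6pm")

# A relay server only ever sees `ciphertext`. Only Bob can decrypt it.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at 6pm"
```

Because the service provider never holds the keys, it cannot read the messages it relays, which is precisely what the proposals discussed below would cut across.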

The European Union’s new regulation would require internet platforms, including end-to-end encrypted messaging applications such as WhatsApp and Signal, to “detect, report and remove” images of child sexual abuse on their platforms, and to adopt a process called “client-side scanning” to deliver this capability.

In practice, this process would be a major breach of privacy, and there is no evidence that such a system can operate without undermining the security that end-to-end encryption provides. However well-intentioned, the proposed regulation is therefore expected to weaken encryption and make the internet less secure.
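For context on what “client-side scanning” would involve technically: proposals of this kind generally have the user’s device compare a perceptual hash of each image against a database of known abuse-material hashes before the content is encrypted and sent. The sketch below is a deliberately simplified version of that hash-matching idea, assuming the Pillow imaging library (the hash function, threshold and database entry are all illustrative):

```python
# Simplified perceptual-hash matching, the core idea behind client-side
# scanning. Real systems use far more robust hashes; this is an illustration.
from PIL import Image

def average_hash(path: str) -> int:
    """Reduce an image to a 64-bit fingerprint that survives minor edits."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints of known illegal images.
KNOWN_HASHES = {0x8F3A5C7E9B1D2F40}  # placeholder value

def scan_before_send(path: str, threshold: int = 5) -> bool:
    """Inspect an image on-device before it is encrypted and sent."""
    fingerprint = average_hash(path)
    return any(hamming_distance(fingerprint, k) <= threshold
               for k in KNOWN_HASHES)
```

The privacy objection is visible in the structure itself: every private photo must be inspected on the device, outside the protection that end-to-end encryption would otherwise provide.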

A New York Times story, which we also covered in a previous What’s Happening in the World? newsletter, reported that Google flagged medical photos a San Francisco father had taken of his son’s groin as sexual abuse material.

Although the photos had been taken to send to a doctor for medical advice about his child, that did not stop the father’s account from being closed and the matter becoming the subject of a police investigation. (You can find the details in the 3rd issue of the What’s Happening in the World Newsletter, available on our LinkedIn page.)

Secure messaging apps came to dominate download charts after the invasion of Ukraine, as citizens turned to end-to-end encrypted services to communicate with friends and family. Similarly, the European Commission has urged its staff to use Signal to protect their communications. Given the period the world is going through, weakening encryption could prove disastrous, particularly for the EU’s own security.

The EU’s proposals have already been criticised by privacy watchdogs, the European Data Protection Board (EDPB) and the European Data Protection Supervisor, who have issued a joint statement calling for changes to the regulations. They describe the proposals as “highly intrusive and disproportionate” and argue that regulations requiring platforms to weaken encryption violate the right to data protection, as well as the right to privacy and respect for family life under Articles 7 and 8 of the EU Charter of Fundamental Rights.

It is also considered impossible for platforms to weaken encryption only for users within the EU; experience with similar measures suggests that any reduction in security would inevitably affect users of these platforms all over the world.

In the United Kingdom, where similar legislation has been proposed, WhatsApp has said it is prepared to withdraw from the market rather than weaken encryption. The same could well happen across Europe.

UK Police x LFR

A recent report by the Minderoo Centre for Technology and Democracy at the University of Cambridge has called for a ban on the use of Live Facial Recognition (“LFR”) by police forces on the streets, at airports and in public places.

LFR is a technology that connects cameras to databases of people’s photographs and matches the camera images against them. According to the research, police use of LFR in public spaces violates human rights and ethical standards, and raises concerns about racial bias.

UK police forces have used the technology in the belief that it aids the fight against crime and terrorism, but in some cases the courts have ruled that the way police deploy LFR constitutes a serious breach of the right to privacy of people going about their daily lives in the public spaces where it is used. In more authoritarian regimes such as China, meanwhile, these technologies are also deployed as tools of repression.

The study analysed LFR deployments by three different law enforcement agencies. Evani Radiya-Dixit, the author of the report, said the researchers found that none of the three met the minimum ethical and legal standards for police use of facial recognition.

While the report emphasises that the technology should be governed by strong values and principles, above all to protect human rights and increase accountability in its use, it also finds that the facial recognition systems used by police forces incorporate few of the known practices for the safe and ethical use of large-scale data systems, a problem that goes far beyond concerns about bias in facial recognition algorithms.

These systems, which read the geometry of a face from a photograph or video and convert it into a unique code – a “faceprint” – involve large-scale processing of biometric data and therefore call for a carefully measured, proportionate application.
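To make the “faceprint” concrete, the sketch below shows how such a system can encode and match faces, assuming the open-source face_recognition library (the file names and the 0.6 threshold are illustrative; operational LFR systems rely on proprietary algorithms):

```python
# Minimal faceprint-matching sketch using the open-source face_recognition
# library. File names are hypothetical placeholders, not real data.
import face_recognition

# Encode each watchlist photo into a 128-dimensional "faceprint" vector.
watchlist = [
    face_recognition.face_encodings(
        face_recognition.load_image_file(path)
    )[0]
    for path in ["suspect_a.jpg", "suspect_b.jpg"]
]

# Encode a face captured in a frame from a live camera feed.
frame = face_recognition.load_image_file("camera_frame.jpg")
captured = face_recognition.face_encodings(frame)[0]

# Match by distance in faceprint space. The threshold trades false alarms
# against missed matches, which is where accuracy and bias concerns arise.
distances = face_recognition.face_distance(watchlist, captured)
if min(distances) < 0.6:  # a commonly cited default threshold
    print("Possible watchlist match - requires human review")
```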

With help from the National Physical Laboratory and input from the Defence Science and Technology Laboratory, the algorithm used by the police is said to have greatly improved in accuracy, with a false alarm rate of less than 0.08% (even at that rate, scanning 10,000 faces would still produce roughly eight false alerts).

Pete Fussey, who was hired by the Met to oversee previous LFR trials and has produced a critical report, said that live facial recognition is a highly intrusive technology, and that it has become difficult to defend it since the Court of Appeal made it clear in 2020 that South Wales police’s use of the technology was unlawful.

Despite this, South Wales Police, which has made 61 arrests through LFR, says it is developing its system “to ensure that there is no risk of breaching equality requirements through prejudice or discrimination”.

Assistant Chief Constable Mark Travis said the technology was intended to “help identify serious offenders to keep the public safe and protect the community from individuals who pose significant risks”, adding: “I believe the public will continue to support our use of all available methods and technologies to keep them safe, provided that what we do is legitimate and proportionate.” He also emphasised that the force intends to keep using the technology.

While all these debates continue, Parliament has remained silent: there is still no guidance in place to balance the potential security benefits of live facial recognition against its intrusiveness, of the kind that governs other police identification methods such as fingerprints and DNA.

ICO x Cabinet Office

The UK Information Commissioner has decided to reduce the £500,000 Monetary Penalty Notice (MPN) issued to the Cabinet Office in 2021 over the New Year Honours data breach to £50,000, which the Cabinet Office has agreed to pay. The decision is said to reflect the Information Commissioner’s Office’s (ICO) new approach of working more effectively with public authorities.

What happened?

The ICO issued the penalty following an investigation into a 2019 data breach in which the Cabinet Office published a file on GOV.UK containing the names and unredacted addresses of more than 1,000 people announced in the New Year Honours list. The personal data was available online for 2 hours and 21 minutes and was accessed 3,872 times.

In December 2021, the Cabinet Office appealed to the First-tier Tribunal, arguing that the level of the fine was “wholly disproportionate”. Notably, the appeal challenged not the facts giving rise to the fine, but only its amount.

According to the settlement agreed between the parties and approved by the court, the UK Information Commissioner agreed to reduce the fine to £50,000.

UK Information Commissioner John Edwards said: “The ICO is a pragmatic, proportionate and effective regulator focused on making a difference to people’s lives. While we consider the initial fine to be proportionate given the potential impact on people affected by the breach, we recognise the economic pressures currently facing public bodies. Since the fine was imposed last year, we have adopted a new approach to working more effectively with public authorities to raise standards of data protection. As we have explained, high fines in the public sector can sometimes not be a deterrent in themselves. With better engagement, including publicising lessons learnt and sharing good practice, I am prepared to use my discretion to reduce the amount of fines imposed on the public sector where appropriate.”

TikTok x EU

TikTok has told its European users that their data can be accessed by employees outside the continent, including in China, amid ongoing political and regulatory concern over access by Chinese authorities to user information on the platform.

The Chinese-owned social video app said it had updated its privacy policy to confirm that staff in a number of countries are granted access to user data in order to keep the platform experience “consistent, fun and safe”.

Other countries in which TikTok staff can access European user data include Brazil, Canada and Israel, as well as the US and Singapore, where European user data is currently stored.

Elaine Fox, TikTok’s head of privacy in Europe, said: “We allow certain employees in our group of companies located in Brazil, Canada, China, Israel, Japan, Malaysia, the Philippines, Singapore, South Korea and the United States to remotely access TikTok European user data in order to fulfil the requirements of the business, through methods recognised under the GDPR and subject to a series of robust security controls and consent protocols.”

It was reported that the data could be used to perform checks on some aspects of the platform, including the performance of its algorithms that recommend content to users, and to detect harmful automated accounts. TikTok had previously admitted that some user data was accessed by employees of ByteDance, the company’s parent company in China.

The privacy policy update, which applies to the United Kingdom, the European Economic Area and Switzerland and takes effect on 2 December, comes against a backdrop of political and regulatory pressure over the use of data generated by the app, which has more than a billion users worldwide.

US President Joe Biden has revoked his predecessor Donald Trump’s executive orders ordering the sale of TikTok’s US business, and has asked the US Department of Commerce to draw up recommendations for protecting the data of US persons from foreign adversaries. The Committee on Foreign Investment in the United States, which reviews commercial agreements involving non-US companies, is also reportedly conducting a security review of TikTok.

Ireland’s data protection authority, which has jurisdiction over TikTok across the European Union, has also launched an investigation into TikTok’s transfer of personal data to China.

Under the European Court of Justice’s Schrems II decision, certain data transfers outside the European Union must take into account the “level of protection” afforded to users’ data at the receiving end, with a particular focus on access by state authorities. In a blog post published last year, TikTok stated that it was “in line” with the regulatory guidance flowing from the Schrems II decision.

In October, TikTok denied a report in the business publication Forbes claiming that these tools had been used to “target” US citizens; the Forbes report alleged that TikTok planned to track the location of at least two specific individuals through the video-sharing app.

In the privacy policy update, Fox stated that TikTok does not collect “precise location information” from users in Europe, whether based on GPS technology or otherwise. The current version of the privacy policy states that TikTok may collect “precise location information (GPS, etc.) with permission”.

Incogni x Google Play

Incogni, a data removal service, conducted a deep dive into the Google Play Store’s new data safety section to uncover how much data apps share and what security practices developers use to protect their users’ personal information.

The research found that more than half of apps openly share user data, and even more apps collect and transmit user data.

Data is seen as the new oil of the digital world, and there is a $250bn+ industry fuelled by data trading. Therefore, it is not surprising that data collection and sharing over the internet, whether through browsers, visited websites or downloaded applications, is so widespread in today’s world.

The Google Play Apps That Share the Most Data

The research found that 55.2% of the apps in the study openly admitted to sharing data. Among these, Incogni also identified clear trends in which kinds of apps share more data than others. According to the findings:

Shopping, work and food and beverage categories were found to be among the app categories that share the most user data.

More interestingly, Incogni analysed 500 free and 500 paid Google Play apps and found that free apps share 7 times more data than paid apps.

Popular apps with more than 500,000 downloads shared 6.15 times more data than less popular apps.

It is likely that the fact that free apps are downloaded about 400 times more than paid apps plays an important role in the research findings, but in general, these findings support the idea that users pay for “free” apps with their personal data.

Types of Data Shared by Apps

Listing the most commonly shared data types and the percentage of analysed apps that share them, Incogni reported that some apps openly share highly sensitive data:

Approximate location histories (13.4%)

E-mail addresses (6.77%)

Names (4.77%)

Home addresses (3.85%)

Precise location information (3.85%)

Photos (3.23%)

In-app messages (1.85%)

Videos (1.69%)

Files and documents (1.54%)

Sexual orientation information (0.62%)

SMS or MMS information (0.46%)

Information on race and ethnic origin (0.15%)

Religious and political beliefs (0.15%)

Google’s definition of “data sharing” excludes transfers to service providers, transfers made for legal reasons, and anonymised data. The sensitive data types listed above are therefore being shared with third parties proper, including marketers and data brokers who sell personal data for profit. Even anonymisation offers limited protection: researchers have found that with as few as 15 data points, anonymised data can be correctly re-identified 99.98% of the time.
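A toy example of why re-identification works: each additional attribute shrinks the set of people who match an “anonymised” record, until only one person remains. The records below are entirely invented:

```python
# Toy illustration of re-identification. Every extra attribute narrows the
# "anonymity set" of matching people. All records here are invented.
population = [
    {"postcode": "10115", "birth_year": 1988, "gender": "F", "car": "EV"},
    {"postcode": "10115", "birth_year": 1988, "gender": "M", "car": "EV"},
    {"postcode": "10115", "birth_year": 1990, "gender": "F", "car": "petrol"},
    {"postcode": "20095", "birth_year": 1988, "gender": "F", "car": "EV"},
]

# An "anonymised" record released without a name attached.
leaked = {"postcode": "10115", "birth_year": 1988, "gender": "F"}

matches = [person for person in population
           if all(person[k] == v for k, v in leaked.items())]
print(len(matches))  # 1 -> three attributes already single out one person
```

With 15 attributes instead of three, the odds of more than one person sharing every value become vanishingly small, which is what the 99.98% figure reflects.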

Social media applications and business applications were found to have the worst practices in this regard. Interestingly, some of the apps that collected the most data were among the apps that reported sharing the least, raising questions about transparency.

In particular, Meta apps such as Facebook, Messenger and Instagram were observed to collect the most user data. According to the findings, these apps collect 36 of 37 possible data points – far more than the 15 needed to re-identify anonymised data – yet claim to share only four data types, despite being in a position to know almost everything about their users.

Is Our Data Really Safe?

In December 2021, the popular mobile payment service Cash App suffered a data breach in which the personal information of 8.2 million users was leaked, exposing consumers to fraud, phishing, identity theft, and even blackmail or extortion.

In the face of such serious threats, users expect applications to take appropriate security measures to protect their personal data. Incogni found that apps are failing in this area as well.

Almost none of the apps passed an independent security review (only 0.8% did), and 4.9% openly admitted that they do not encrypt personal data in transit in any way. Worse still, more than half of the apps made no claim at all to encrypt data in transit.
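For context, “encrypting data in transit” in its simplest form means sending data over HTTPS (TLS) rather than plain HTTP. A minimal sketch using Python’s requests library (the endpoint URLs are placeholders):

```python
# "Encryption in transit" at its simplest: HTTPS (TLS) versus plain HTTP.
# The endpoints below are placeholders, not a real API.
import requests

profile = {"name": "Jane Doe", "email": "jane@example.com"}

# Over HTTPS the payload is encrypted between the device and the server;
# certificate verification is on by default in requests.
requests.post("https://api.example.com/profile", json=profile, timeout=10)

# Over plain HTTP the same payload crosses the network readable by anyone
# on the path - the failure mode the 4.9% of apps admitted to.
requests.post("http://api.example.com/profile", json=profile, timeout=10)
```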

UK x EU

A prominent European Union (“EU”) MEP has described her meetings with the UK government about the country’s data protection reform plans as “appalling”.

French MEP Gwendoline Delbos-Corfield said the delegation felt “taken for a fool”: Digital Minister Julia Lopez left halfway through their meeting, UK Home Office ministers did not bother to meet them at all, and the UK’s data regulator, the Information Commissioner’s Office (ICO), sent acting executive director Emily Keaney to the meeting instead of Commissioner John Edwards. Delbos-Corfield said the ICO official “knew nothing about data protection” and was unable to elaborate beyond one-sentence answers.

UK Digital Secretary Michelle Donelan promised to replace the “crazy” data regulation rulebook the UK inherited from the EU, which is based on the GDPR. Despite this, the UK needs to maintain similar privacy standards to the EU to maintain data flows with the 27-member bloc.

Of her meetings with UK government officials about the reform plans, the French MEP said: “It was horrible, it was all about growth and innovation and nothing about human rights. I never heard them say that protecting data is a fundamental right, even in Hungary they say that.”

Italian MEP Fulvio Martusciello said his impression from the visit was that the UK was “giving up privacy for commercial gain”, adding: “In Europe, the protection of the individual prevails; in the UK, the protection of the economy prevails.”

A UK government official dismissed the inferences of the MEPs, saying: “We were clear that we have a strong commitment to high data protection standards.”

Delbos-Corfield stated that the most worrying thing about the visit was the weakness of the ICO.

A spokesperson for the UK Department for Digital said: “The UK is committed to protecting people’s data and our reforms will strengthen the country’s trusted high standards, while making it easier for businesses and researchers to unlock the power of data to improve society and grow the economy.” Government ministers are understood to have attended the meetings for as long as their diary pressures allowed.

Clearview x EU

German activist Matthias Marx has a pale, broad face topped with unkempt blonde hair, and he claims that his face has been stolen: so far, those features have been mapped and monetised by three companies without his consent. As has happened to billions of others, his face has been turned into a search term.

Marx wanted to know whether there were any photos of his face in the database of Clearview AI, which scrapes billions of photos from the internet to build a huge database of faces, so he emailed Clearview to ask. A month later he received a reply with two screenshots attached. The photos were about ten years old, but both were of him. Marx knew the photos existed; what he did not know, unlike Clearview, was that a photographer had sold them without his authorisation on the stock photo site Alamy.

According to Marx, it was clear that Clearview had violated the GDPR by using his face and biometric data without his knowledge or consent, so he filed a complaint with his local privacy regulator in February 2020. It was the first complaint against Clearview in Europe, but it is still unclear whether the case has been resolved: a spokesperson for the regulator said the case had been closed, yet Marx says he has never been informed of the outcome.

“It has been almost two and a half years since I filed a complaint against ClearView AI and the case is still open. Even if you take into account that this case is the first of its kind, it is very slow.”

Across Europe, the faces of millions of people appear in search engines operated by companies like Clearview. The region may boast some of the strictest privacy laws in the world, but European regulators, including Hamburg’s, are struggling to enforce them.

In October, the French data protection authority became the third EU regulator to fine Clearview 20 million euros for breaching European privacy rules. Clearview has still not removed facial data of EU citizens from its platform, while similar fines imposed by regulators in Italy and Greece remain unpaid.

Like other privacy activists, Matthias Marx does not believe that it is technically possible for Clearview to permanently delete a face. He believes that Clearview’s technology, which constantly scans the internet for faces, will find and catalogue it again and again. Clearview also did not respond to a request for comment on whether it could permanently delete people from its database.

Clearview told investors it is on track to have 100 billion photos in its database this year, an average of 14 photos for each of the planet’s 7 billion people.

According to CEO Hoan Ton-That, the way Clearview works – sending bots to find faces online and storing them in a database – makes it impossible to keep the faces of EU citizens off the platform. Comparing his product with others on the market, he said there is no way to determine from a publicly available photo alone whether a person is an EU resident, and that it is therefore impossible to delete EU residents’ data, since Clearview AI only collects publicly available information from the internet, just like other search engines such as Google, Bing or DuckDuckGo.

But privacy activists argue that the difference between searching by name and searching by face is crucial. “A name is not a unique identifier. A name is something you can hide in public, but a face is not something you can hide in public, unless you leave your house with a bag on your head,” said Lucie Audibert, a lawyer at Privacy International.

Calling on EU regulators to be more aggressive in their enforcement, Audibert added: “It’s very difficult for a European regulator to enforce a decision against a US company if the company is not willing to co-operate. This is really a test case to see what kind of restrictive power the GDPR has.”

Since 2020, Marx has discovered that photos of his face have continued to spread. When he searched for his face on another facial recognition platform, Pimeyes, it surfaced even more photos than Clearview had. One of them, ironically, shows him giving a speech about privacy.

Pimeyes is technically different from Clearview because it doesn’t store faces in a database, but instead, according to privacy experts, when a user uploads a photo, it searches the internet for other photos related to that photo. Anyone can search the site for free, but a fee must be paid to access links to the photos.

CEO Giorgi Gobronidze emphasises that, unlike Clearview, Pimeyes does not scan social media platforms such as Facebook, Twitter and VKontakte: “The fact that we can theoretically scan social media does not mean that we should.” He added that thousands of people do not know that their photos are being used by different online sources, and that they have a right to know.

“People can send opt-out requests with every free search, or ask for a specific photo to be removed, or to block further processing of that photo with a single click,” Gobronidze said.

Marx, for his part, said the company should never have used his picture in the first place, and that it could process his biometric data only with his explicit consent.

In March this year, Marx discovered that Public Mirror held four images of his face in its files. As with other face search engines, it was not just the photos themselves that revealed information about him, but the online links that accompanied them: Public Mirror’s links serve as a directory of media articles written about Marx and conferences where he has spoken.

Each of these platforms reveals deeply personal information. “You can find out where I work, which political party I support,” said Marx. The photos these companies collect point to an industry that can reveal far more about a person than any social media profile.

When Matthias Marx started addressing this issue in 2020, all he wanted was for a company to stop collecting photos of his face. The problem is now much bigger than that. Today, regulators are calling on the industry to completely stop collecting photos of Europeans.

Medibank x Data Breach

Medibank has urged its customers to maintain a high level of vigilance after cybercriminals began leaking sensitive medical records stolen from the Australian health insurance giant.

A ransomware group linked to the REvil gang began publishing the stolen customer records, including names, dates of birth, passport numbers and information on medical claims, in the early hours of 9 November, after Medibank said it would not pay the ransom demand. “We believe there is a limited chance that paying the ransom would ensure the return of our customers’ data and prevent it from being published,” Medibank said in a statement.

Cybercriminals divided Australian breach victims into “bad” and “good” lists, with the former including numeric diagnosis codes that appeared to link victims to drug and alcohol addiction and HIV. For example, one record contained an entry labelled “F122”, which corresponds to cannabis dependence under the International Classification of Diseases published by the World Health Organisation.

It is also believed that the leaked data contained the names of high-profile Medibank customers, including senior Australian government MPs such as Prime Minister Anthony Albanese and Cyber Security Minister Clare O’Neil.

Some of the leaked data also appears to include correspondence of negotiations between the cybercriminals and Medibank CEO David Koczkar.

Screenshots of WhatsApp messages were also leaked, suggesting that the ransomware group planned to leak “keys to decrypt credit cards”, despite Medibank claiming that no banking or credit card details were accessed. “Based on our investigation into this cybercrime to date, we believe the culprit did not access credit card and banking information,” Medibank spokesperson Liz Green said in an emailed statement on 9 November.

The cybercriminal gang behind the Medibank ransomware attack, which used a variant of REvil’s file-encrypting malware, has so far leaked the personal data of about 200 Medibank customers from the data it claims to have stolen. Medibank has confirmed that the criminals accessed the personal information of about 9.7 million customers and the health claims data of about 500,000 customers.

In light of the data leak, which exposed highly confidential information that could be misused for financial fraud, Medibank and the Australian Federal Police are urging customers to maintain a high level of vigilance against phishing scams and unexpected activity on online accounts. Medibank is also advising users not to reuse passwords and to enable multi-factor authentication on all online accounts where the option is available.

In its latest update, Medibank braces itself for the situation to worsen, saying that it “expects the criminal to continue publishing files on the Dark Web”. Cybercriminals have said they plan to continue partially publishing data on the Dark Web leak site, including merges, source codes, bill of materials and some files obtained from the Medibank file system from different hosts.

It is not yet known whether Medibank customers will receive compensation following the breach, or whether Medibank will face legal action for failing to protect users’ confidential medical data. The breach comes just weeks after Australia approved a legislative change to the country’s privacy laws following a lengthy consultation process on the reforms. The Privacy Legislation Amendment Bill 2022 will increase the maximum penalties that can be imposed under the Privacy Act 1988 for serious or repeated privacy breaches and give the Australian information commissioner greater powers.

Twitter x Dutch Users

The Dutch Data Protection Foundation (Stichting Data Bescherming Nederland) is preparing a class action lawsuit against Twitter, whose former advertising subsidiary MoPub is alleged to have collected and sold privacy-sensitive data of some 11 million Dutch residents without authorisation.

MoPub collected people’s data through about 30,000 popular mobile apps between 2013 and 2021, including Buienradar, Flitsmeister, Duolingo, Wordfeud, Vinted and Grindr; Twitter sold the advertising company earlier this year.

“Almost all of the more than 30,000 apps on our list had a small piece of code from MoPub,” said Anouk Ruhaak, chair of the foundation. “The code stores when you launched the app, where you were and how long you used the app. If you combine all the data, you get a picture of a person.”
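For illustration only, the kind of event such an embedded snippet could emit might look like the following hypothetical sketch; the field names are invented and do not reflect MoPub’s actual code:

```python
# Hypothetical sketch of the kind of event an embedded ad SDK could emit.
# Field names are invented for illustration; this is not MoPub's real code.
import json
import time
import uuid

def build_session_event(app_id: str, lat: float, lon: float,
                        session_seconds: int) -> str:
    """Bundle launch time, location and usage duration into one payload,
    the three signals Ruhaak describes being combined into a profile."""
    return json.dumps({
        "device_id": str(uuid.uuid4()),   # stable in practice, enabling linking
        "app_id": app_id,
        "launched_at": int(time.time()),  # when you opened the app
        "location": {"lat": lat, "lon": lon},  # where you were
        "session_seconds": session_seconds,    # how long you used it
    })

# One such event per app launch; combined across 30,000 apps, the events
# add up to the "picture of a person" described above.
print(build_session_event("weather-app", 52.37, 4.90, 840))
```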

Ruhaak said MoPub sells the data and companies use it for targeted adverts.

“You think you’re getting an ad, maybe you don’t care but what you don’t realise is that you’re getting ads that are different from others and that can affect your behaviour. For example, a 15-year-old with an eating disorder is using a carb counting app and getting adverts for diet programmes. This can be very harmful.”

Another example Ruhaak gives is a woman who uses a mobile app to track her menstrual cycle: companies can infer that she wants to have a child, so her chances of being shown adverts for baby socks increase. That much may seem innocent, but it is claimed that some companies also use such targeting to hide their job vacancies from her, because they do not want to hire someone who may soon go on maternity leave – using targeted advertising to discriminate and deepen gender inequality.

Ruhaak does not know whether the companies behind the 30,000 mobile apps in question were aware of MoPub’s data collection. It is also not clear whether advertisers are aware that they are using illegally obtained data.

The foundation has launched an awareness-raising campaign to inform affected Dutch residents that it is filing a class action on their behalf, with the aim of obtaining compensation for all injured parties. No claim amount has been set yet, but in comparable past cases between €250 and €2,500 per person has been paid.

FIFA World Cup x Qatar

Around 1.5 million visitors are expected to travel for the 2022 World Cup, which will be held in Qatar from 20 November to 18 December. Foreigners visiting the country are required to download two mobile apps, the official World Cup app Hayya and the Covid monitoring app Ehteraz.

Experts characterise the mobile apps as a form of spyware because they grant the Qatari authorities broad access. This access is said to include reading, deleting or modifying content on users’ devices, and even searching them directly.

France’s data protection authority (Commission Nationale Informatique & Libertés, “CNIL”) is telling football fans how to protect themselves from surveillance by Qatar World Cup mobile apps. “Ideally, travel with an empty smartphone or an old phone that has been reset,” says a CNIL spokesperson, “and pay particular attention to photos, videos or digital artefacts that could put you in a difficult situation in terms of the country’s legislation.”

The sporting event has been plagued by controversy, including allegations of bribery and corruption, exploitative working conditions, concerns about Qatar’s treatment of LGBTQ+ people and media freedom.

The Norwegian data protection authority is likewise expected to advise travelling football fans to install the mobile apps on a burner phone. Beyond the disposable phone itself, CNIL has further tips for limiting the spyware’s impact.

The CNIL recommends that visitors only install the mobile app just before leaving their country and delete it as soon as they return to France. They are also encouraged to limit online connection to services that require authentication to a minimum, to keep their smartphone with them at all times and have a strong password, and to limit system authorisations to those that are strictly necessary.

According to CNIL’s map of privacy rules around the world, Qatar has a framework in place, but the European Union does not recognise it as providing protection equivalent to the EU’s own data protection rulebook.

Other European authorities have similar doubts and concerns about Qatari mobile apps. The German Federal Foreign Office, the Federal Office for Information Security and the Commissioner for Data Protection and Freedom of Information are investigating both apps.