What’s Happening in the World?

While the field of data protection develops at an accelerating pace in our country, global developments remain on the radar of the Personal Data Protection Authority (“Authority”).

As we have repeatedly observed, the Authority keeps pace with the global agenda, in particular the EU General Data Protection Regulation (“GDPR”), and strives to meet the requirements of the fast-moving data privacy world.

As GRC LEGAL, we closely follow the world agenda and present a selection of the current news for your information with this content.

The following news items are from December 2023.

Information Commissioner’s Office x United Kingdom

The Information Commissioner has warned the UK’s leading websites to make cookie changes in line with data protection law or face enforcement action.

Some websites do not give users a fair choice about whether they are tracked for personalised advertising. The Information Commissioner’s Office (ICO) has previously issued clear guidance that organisations should make it as easy for users to “Decline All” advertising cookies as “Accept All”. This guidance states that websites can continue to show adverts when users decline all tracking, but should not tailor/personalise them to the person browsing.

The ICO has sent a letter to the companies that operate many of the UK’s most visited websites outlining its concerns and giving them 30 days to ensure that their websites comply with the law.

Stephen Almond, ICO Executive Director of Regulatory Risk, said in the letter: “We are all surprised to see hotel adverts that appear to have been designed specifically for us, for example when we book a flight abroad. Our research shows that many people are concerned about companies using personal data for targeted advertising without their consent. Gambling addicts may be targeted with betting offers based on browsing records, women may be targeted with sad baby adverts shortly after an abortion, and someone exploring their sexuality may be served adverts that reveal their sexual orientation. Many of the most prominent websites have got the practices right, but for companies that have not yet done so, we offer a clear choice: make the changes now or face the consequences.”

The ICO will provide an update on this work in January, including details of companies that have not addressed the concerns. The action will be part of wider work to ensure people’s rights are protected by the online advertising industry.

GRC LEGAL Comment

Unlawful cookie practices carried out through the websites of companies whose main field of activity is e-commerce expose individuals to targeted advertising and profiling, in breach of both the GDPR and national legislation, and deeply undermine the protection of personal data, which is a fundamental right. The main reason these activities continue, despite being prohibited by numerous guidelines and decisions of administrative authorities, is that the sanctions envisaged are not a deterrent.

As a matter of fact, considering the revenues these companies earn through cookie practices, the administrative fines they face are treated as acceptable risks.

It remains to be seen to what extent the ICO, which has sent companies a clear message, will impose deterrent sanctions on those that do not comply with the law, and how companies will shape their activities to protect users’ personal data online.

Canada x LockBit

A data breach affecting Canadian government, military and police employees could involve 24 years of personal and financial information, officials said in a statement.

The Treasury Board of Canada Secretariat issued a statement warning anyone who has used services provided by government contractors SIRVA or BGRS Canada, two leading global shipping and relocation companies, that their data may have been compromised. Meanwhile, the ransomware gang LockBit claimed responsibility for the attack.

The SIRVA/BGRS breach was first made public on 20 October in a notice sent by government officials to military and civilian personnel, which contained little information about the scope of the attack. According to CBC, Canada’s public radio and television broadcaster, the notification came after transport services were disrupted and the BGRS website went offline on 29 September. Officials now say that information on current and former Government of Canada, Canadian Armed Forces (“CAF”) and Royal Canadian Mounted Police (“RCMP”) personnel was likely involved in the breach.

“At this time, given the significant amount of data being assessed, we cannot yet identify the individuals affected,” the Treasury Board said in a statement. “However, preliminary information indicates that the breached information may belong to anyone who used transport services as far back as 1999 and may include financial information provided by employees to companies.”

The incident was also reported to the Canadian Cyber Security Centre, the Office of the Privacy Commissioner and the RCMP. The government is working with SIRVA/BGRS to investigate the incident and ensure that the vulnerabilities used in the attack are addressed. A spokesperson for the Treasury Board of Canada Secretariat told SC Media that they cannot provide further details, but will continue to share this information as it becomes available.

LockBit Claims to Have Stolen 1.5 TB of Data from the Canadian Government!

Ransomware gang LockBit claimed responsibility for the attack on its dark web leak site on 17 October, according to a screenshot published by BleepingComputer. The group claims to have stolen more than 1.5 TB of documents and “3 full CRM backups” from SIRVA’s European, North American and Australian branches. As of 20 November, LockBit’s site stated that the group had released “all available data”. Government officials have not publicly confirmed the identity of the perpetrator or the exact volume of data stolen.

The Canadian government has been using BGRS’s services since 1995. According to an overview on the BGRS website, the contractor facilitates the transport of more than 14,000 CAF members annually. The government has also contracted SIRVA Canada since 2009, according to government records. SIRVA and BGRS merged in August 2022. SC Media reached out to SIRVA for comment but did not receive a response.

The government is offering services, including credit monitoring and passport reissuance, to all current and former government, CAF and RCMP employees who have relocated with BGRS or SIRVA Canada in the last 24 years. The authorities also urged potentially affected individuals to update their login credentials, enable multi-factor authentication and monitor their online accounts for unusual activity.

McNee agreed with the government’s recommendation, adding that the age of the data requires some additional precautions: “We also recommend that citizens consider changing the answers to security or account recovery questions for critical online accounts, as attackers may now hold the ‘correct’ answers to questions such as ‘What street did you live on in 2005?’”

GRC LEGAL Comment

The scale of the data breach poses a serious threat to the privacy and security of individuals. The fact that public institutions such as the government, military and police organisations were affected by the data breach potentially endangers large segments of society. In addition, the fact that LockBit, a ransomware gang, took responsibility for the data breach that affected many people shows that cybercrime has become organised.

In this situation, it is necessary to underline that all kinds of measures should be taken to ensure data security and that these measures should be up-to-date and effective. In this context, we believe that the government’s proposed measures of updating user credentials, enabling multi-factor authentication and monitoring online accounts are the most fundamental steps that can be taken to increase personal data security.
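The multi-factor authentication recommended above is most commonly implemented with time-based one-time passwords (TOTP, per RFC 6238, which builds on the HOTP algorithm of RFC 4226). A minimal Python sketch of how such codes are generated (the shared secret below is the RFC test secret, used purely for illustration):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the 8-byte big-endian counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: offset from the low nibble of the last byte
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    if for_time is None:
        for_time = time.time()
    return hotp(secret, int(for_time // step))
```

Because the counter is derived from the current time, the server and the user’s authenticator app independently compute the same short-lived code, which is why a stolen password alone is no longer enough to log in.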

Meta x Underage Users

Meta, the parent company of social media platforms such as Facebook and Instagram, faces serious allegations in an ongoing federal lawsuit. Unsealed court documents revealed that Meta is accused of knowingly allowing accounts belonging to children under the age of 13 to remain active since at least 2019, collecting their personal information without parental consent.

This revelation comes at a time of heightened concerns about online privacy and the protection of minors on social media.

Reports of Underage Users and Legal Consequences

Thirty-three state attorneys general stated that the company received more than one million reports of underage users on Instagram from various sources, including parents and community members, between early 2019 and mid-2023.

Despite these reports, Meta allegedly only deactivated a fraction of these accounts. This action or inaction forms the basis of the complaint. The 54-count lawsuit alleges that Meta violated a number of state-based consumer protection laws and the Children’s Online Privacy Protection Act (“COPPA”).

COPPA prohibits companies from collecting personal information from children under 13 without parental consent. The court document emphasises that Meta’s own records demonstrate the scale of Instagram’s underage user base: “millions of children under the age of 13” and “hundreds of thousands of teen users” spend extensive time on the platform.

Meta’s Response and the Challenge of Age Verification

In response to the accusations, Meta stated that Instagram’s terms of use prohibit users under the age of 13 and that the company has measures in place to remove such accounts.

However, Meta acknowledges the difficulty of verifying the age of online users, especially those without identity documents. In a statement to CNN, Meta also emphasised that it supports federal legislation requiring app stores to obtain parental consent for app downloads by young people under the age of 16.

According to Meta, this approach would reduce the need to collect sensitive information such as ID cards for age verification. As the case progresses, it raises important questions about online privacy, the protection of minors, and the responsibility of tech giants to oversee their platforms.

With potential civil penalties of up to hundreds of millions of dollars, this case could set a precedent for how social media companies handle underage users and data privacy.

GRC LEGAL Comment

Meta, which owns platforms hosting millions of users worldwide, faces allegations of privacy violations against children, and a global crisis is now on the agenda. Indeed, Meta itself recognises that merely banning users under the age of 13 in its terms of use does not amount to good practice in protecting children online. In this sense, the case may set a precedent for social media companies.

This news once again highlights the fact that, while international legislation such as the GDPR contains rules on children’s personal data, our legislation has no special provisions and/or guidelines for children. We hope that the guideline on children currently being prepared by the Authority will introduce decisive rules that follow these international regulations and developments and can guide practice.

CNIL x Groupe Canal +

The French Data Protection Authority (“CNIL”) has announced a €600,000 fine against Groupe Canal+ over concerns raised about the media company’s direct marketing activities.

According to the CNIL, the company sent marketing emails to users without their consent, in breach of both the GDPR and the French Privacy Act. CNIL stated that the company sent marketing emails to individuals who had provided their personal information to one of Canal+’s partners and not to Canal+.

These individuals were not told that their information would be shared with Canal+ and used for Canal+’s marketing activities. According to the CNIL, Groupe Canal+ should have ensured that the group companies obtained the appropriate consent.

In addition, the judgment against the company covered other alleged infringements of the GDPR, including the failure to disclose data retention periods in the company’s privacy policy, the failure to provide privacy notices when contacting consumers by telephone, and the failure to respond to consumers’ access requests within one month.

In addition to data privacy concerns, the decision also highlighted data security failings. According to the CNIL, the company failed to take appropriate security measures when storing employee passwords. It also failed to notify the CNIL of a breach that left subscriber data visible to others for five hours.

GRC LEGAL Comment

Strict supervision and imposition of high administrative fines especially for media companies where marketing activities are carried out intensively will help to prevent data breaches arising from direct marketing activities that require explicit consent. In this sense, we hope that the fine of EUR 600,000 imposed on Groupe Canal+ will serve as a deterrent for all media companies.
Booking.com x Cyber Attackers

Cyber attackers are stepping up their attacks on Booking.com customers by posting adverts on dark web forums asking for help in finding victims.

Cyber attackers are offering up to $2,000 (£1,600) for hotels’ login details. Since at least March, hotel customers have been tricked into sending money to cyber attackers.

Cybersecurity experts say Booking.com itself has not been hacked, but cyber attackers have found ways to break into the management portals of hotels that use the service.

Researchers at cybersecurity company Secureworks say cyber attackers pose as a former guest who left their passport in their room, tricking hotel staff into downloading a malicious piece of software called Vidar Infostealer.

The attacker then sends a Google Drive link said to contain a picture of the passport. Opening the link downloads the malware onto staff computers, where it automatically scans for Booking.com access. This allows the attackers to log into the Booking.com portal, view every customer with a room or holiday booking, and message customers through the app to trick them into paying the attackers instead of the hotel.

A Booking.com spokesperson said: “While this breach did not occur on Booking.com itself, we recognise its severity for those affected, which is why our teams are working diligently to support our partners in securing their systems as quickly as possible and to help those potentially affected, including helping customers recover lost funds.”

Cyber security expert Graham Cluley also said Booking.com hotels should implement multi-factor authentication to make it harder for criminals to gain illegal entry. “Booking.com has started showing a warning message at the bottom of chat windows, but it could do much more than that. For example, not allowing links to websites registered only a few days ago to be included in the chat would prevent newly created fake sites from being used to trick customers into paying.”
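Cluley’s suggestion of blocking links to freshly registered domains can be sketched as a simple message filter. In the sketch below, the registration-date dictionary stands in for a real WHOIS/RDAP lookup, and all domain names are invented for illustration:

```python
import re
from datetime import datetime, timedelta

# Crude pattern to pull the host part out of http(s) links in a chat message
URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def suspicious_links(message: str, registered_on: dict, now: datetime,
                     min_age_days: int = 7) -> list:
    """Return domains in `message` registered less than `min_age_days` ago.

    `registered_on` maps domain -> registration datetime; in a real system
    this would come from a WHOIS/RDAP query. Domains with no known
    registration date are treated as suspicious by default."""
    flagged = []
    for domain in URL_RE.findall(message):
        domain = domain.lower().split(":")[0]  # strip any port
        reg = registered_on.get(domain)
        if reg is None or now - reg < timedelta(days=min_age_days):
            flagged.append(domain)
    return flagged
```

A chat platform could refuse to deliver, or at least prominently warn about, any message for which this filter returns a non-empty list.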

GRC LEGAL Comment

Considering that the customer information on Booking.com was accessed by attacking the hotels’ own management portals, it is important, as an administrative measure, that the data controller hotels provide personal data awareness training for their employees. In addition, technical measures that block access to malicious links even if staff click on them would reinforce data security and help prevent possible data breaches.

European Data Protection Board x Meta

Following the urgent binding decision of the European Data Protection Board (“EDPB”) dated 27 October 2023, the Irish Data Protection Authority (“IE DPA”), in its decision of 10 November 2023 against Meta Ireland Limited (“Meta IE”), prohibited the processing of personal data for behavioural advertising on the legal bases of contract and legitimate interest.

Behavioural advertising refers to the creation and delivery of personalised advertisements based on internet users’ online behaviour. This type of advertising aims to tailor ads to users’ interests by analysing data such as browsing habits, search terms, click history and other online interactions. However, behavioural advertising also raises concerns about user privacy and data security. Because such advertising models build personal profiles by processing user data, they represent an area that must be managed carefully in terms of how this data is used and protected.

The EDPB’s urgent binding decision follows a request from the Norwegian Data Protection Authority (“NO DPA”) for final measures effective across the entire European Economic Area (“EEA”).

EDPB Chair Anu Talus said: “After careful consideration, the EDPB considered it necessary to instruct the IE DPA to impose an EEA-wide processing ban on Meta IE. In December 2022, the EDPB Binding Decisions had already clarified that contract was not an appropriate legal basis for the processing of personal data carried out by Meta for behavioural advertising. In addition, Meta was found not to have complied with the orders issued by the IE DPA at the end of last year. This led to the use of the emergency procedure set out in Article 66(1) GDPR, which is an exception to the ordinary cooperation procedure and can only be used in exceptional circumstances.”

On 14 July 2023, the NO DPA issued a temporary ban under Article 66 GDPR against Meta IE and Facebook Norway AS (“Facebook Norway”) in relation to the processing of personal data of data subjects in Norway for behavioural advertising on the basis of contract or legitimate interest. The ban was valid for three months and applied only within Norway. On 26 September 2023, the NO DPA submitted an urgent request to the EDPB for a binding decision imposing final measures applicable to users in all EEA countries.

Information Corner

Pursuant to the urgency procedure regulated by Article 66 GDPR, data protection authorities may, in exceptional cases where they determine that urgent action is required to protect the rights and freedoms of data subjects within their territory, adopt provisional measures with legal effect within their territory for a maximum period of three months.

These measures are adopted by derogation from the GDPR’s consistency mechanism (Article 63 GDPR) or the Single Competent Authority mechanism (Article 60 GDPR). This derogation is designed to ensure that the authorities are always and under all circumstances in a position to protect the rights and freedoms of individuals in their Member State.

The data protection authority adopting such provisional measures must without delay notify these measures and the reasons for adopting them to the other data protection authorities concerned, the European Data Protection Board and the European Commission.

If the data protection authority which has adopted such interim measures finds that final measures are urgently needed, it may request an urgent opinion or an urgent binding decision from the EDPB, providing reasons for the urgent need to adopt final measures by derogation from standard co-operation and consistency procedures.

On the basis of the evidence obtained, the EDPB concluded that Meta IE continues to infringe Article 6(1) GDPR (lawfulness of processing) by improperly relying on the legal bases of contract and legitimate interest for the processing of personal data for behavioural advertising, and that it has failed to comply with the decisions of the data protection authorities, in particular the IE DPA’s final decision of December 2022.

Given the urgency, the EDPB concluded that final measures were urgently needed, as the regular cooperation mechanisms could not be applied in the usual way and data subjects could suffer serious and irreparable harm without such measures.

The EDPB also found that the IE DPA failed to respond to the request for mutual assistance from the NO DPA within the deadline set out in the GDPR, noting that in this case the presumption of urgency set out in Article 61(8) of the GDPR applies, necessitating a departure from the usual mechanisms of co-operation and consistency.

Consequently, the EDPB considered that it was appropriate, proportionate and necessary for the IE DPA to order Meta IE to prohibit the processing of personal data collected on Meta products for the purposes of contractual and legitimate interest-based behavioural advertising, and that the final measures in respect of this process should be adopted by the IE DPA. This urgent binding decision was sent to the IE DPA, the NO DPA and the other relevant DPAs and the IE DPA concluded its final decision on the matter on 10 November 2023.

GRC LEGAL Comment

The urgency procedure put an end to Meta’s unlawful behavioural advertising activities, which allow the collection of highly sensitive personal data suitable for categorising and profiling data subjects, carried out under cover of the contract and legitimate interest legal bases in Article 6 GDPR. This case demonstrates the vital importance of a swift and effective response to GDPR infringements and of interaction with national authorities at the European level. We hope that the EDPB’s decision will have a deterrent effect on Meta and set an example for other companies collecting data for behavioural advertising purposes.

Court of Justice of the European Union x Scoring Algorithms

On Thursday 7 December, the Court of Justice of the European Union (“CJEU”) ruled that decisions significantly affecting individuals may not be based on automated scoring systems using personal data. Years after the entry into force of the GDPR, this is the CJEU’s first judgment on Article 22, the provision on automated individual decision-making, and it is expected to have significant implications for social security and credit institutions.

Between 2018 and 2021, the scandal that led to the resignation of Mark Rutte’s third government in the Netherlands stemmed from a faulty risk scoring algorithm that led tax authorities to accuse thousands of people of fraud in the childcare subsidy programme. The court has now ruled that automated scoring of any kind is prohibited if it significantly affects people’s lives. In the case of SCHUFA, Germany’s largest private credit agency, which rates people’s creditworthiness with a score, the judgment states that SCHUFA’s scoring breaches the GDPR if SCHUFA’s customers (such as banks) attribute a “decisive” role to it in their contractual decisions.

This judgement could have far-reaching consequences. In France, for example, the National Family Allowance Fund (“CNAF”) has used an automated risk scoring algorithm since 2010 to initiate home inspections on suspicion of potential fraud. According to Le Monde and Lighthouse Reports, CNAF’s data mining algorithm analyses and scores 13.8 million households per month to prioritise checks. This algorithm scores all beneficiaries between 0 and 1 each month, using around 40 criteria based on personal data to which a risk coefficient is attributed. The closer the beneficiaries’ score is to 1, the more likely they are to receive a home inspection.
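The kind of weighted risk scoring described above, where each criterion carries a risk coefficient and the combined result is squashed into a score between 0 and 1, can be illustrated with a small sketch. The criterion names and weights below are entirely hypothetical and are not CNAF’s actual model:

```python
import math

def risk_score(features: dict, weights: dict, bias: float = 0.0) -> float:
    """Hypothetical weighted risk score in (0, 1).

    Each feature value is multiplied by its risk coefficient; the weighted
    sum is passed through a logistic function so the result can be read as
    a score between 0 and 1, with scores closer to 1 ranked higher for
    inspection. Purely illustrative."""
    z = bias + sum(weights.get(name, 0.0) * value
                   for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to (0, 1)
```

Ranking households by such a score is what makes the system contentious: individually plausible criteria, once summed, can systematically push certain groups toward a score near 1 and hence toward repeated inspections.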

Philippe Latombe, a French centrist MP and member of the CNIL, told Euractiv that CNAF sees its algorithm as a risk assessment system, filtering people according to their data and using personal data for the purpose of “providing allowances to people who need them”. Latombe continued: “While each criterion, taken individually, may seem logical for anti-fraud purposes, the sum total can be discriminatory when the criteria are linked together.”

French MP Aurélien Taché commented: “As usual, the government is fighting the poor rather than poverty itself, and social scoring does not even respect the most basic principles of the defence of freedoms and the right to privacy.”

Scoring Restrictions

The GDPR permits public and private organisations to use such data mining algorithms in only three cases: the explicit consent of the individual, contractual necessity, or an obligation laid down by law. Within this framework, feeding personal data into scoring algorithms is now far more limited in the EU.

GRC LEGAL Comment

By combining large amounts of personal data, scoring algorithms may not only lead to discrimination against or victimisation of individuals, but, if used incorrectly, may also produce serious consequences similar to the scandal in the Netherlands. For this reason, guiding interventions such as this judgment on how institutions and organisations should use such systems are important. We hope the decision will drive both organisations and states to strengthen personal data protection standards and to regulate similar algorithmic systems more carefully.