What’s Happening in the World?

While the field of Data Protection is developing at an accelerating pace in our country, worldwide innovations continue to remain on the radar of the Personal Data Protection Authority (“Authority”).

As we have seen on many occasions, the Authority follows the global agenda, in particular the European Union’s General Data Protection Regulation (“GDPR”), and strives to keep pace with the requirements of the fast-moving data privacy world.

As GRC Legal Law Firm, we closely follow the world agenda and, with this piece, present a selection of current developments for your information.

The news below covers January 2023.

Google x Location Tracking

Google has settled two more location tracking lawsuits, worth a combined $29.5 million, filed by Washington, DC and the state of Indiana in the United States (“US”).

The search giant is expected to pay $9.5 million to Washington DC and $20 million to Indiana after the states sued Google for allegedly tracking users without their consent. The $29.5 million settlement adds to the $391.5 million Google agreed to pay 40 states last month over similar allegations.

DC Attorney General Karl Racine said, “My office reached a settlement with Google requiring the company to pay $9.5 million for deceiving and manipulating consumers. This manipulation included the use of dark patterns to access location data. We sued Google for making it virtually impossible for users to stop their location from being tracked. Now, thanks to this settlement, Google will be required to disclose to consumers how their location data is collected, stored and used.”

What is a Dark Pattern?

Dark patterns are design elements that intentionally mislead, coerce, and/or pressure website visitors into making unintended and potentially harmful decisions. They take the form of deceptively labelled buttons, choices that are hard to reverse, and graphic elements such as colours and shadows that draw the user’s attention towards or away from certain options.
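As a purely hypothetical illustration (the prompts and the heuristic below are invented for this sketch and are not drawn from any of the cases discussed), the difference between a guilt-framed, pre-selected consent prompt and a neutral one can be made concrete in a few lines:

```python
# Hypothetical consent prompts (invented for illustration only).
DARK_PROMPT = {
    "message": "Share your location for a better experience?",
    "accept_label": "Yes, improve my experience",  # benefit-framed, prominent
    "decline_label": "No, I prefer a worse app",   # guilt-framed decline
    "accept_is_default": True,                     # consent pre-selected
}

NEUTRAL_PROMPT = {
    "message": "Share your location?",
    "accept_label": "Allow",
    "decline_label": "Don't allow",
    "accept_is_default": False,  # user must make an active choice
}

def is_dark_pattern(prompt: dict) -> bool:
    """Crude heuristic: flag pre-selected consent or guilt-framed decline text."""
    guilt_words = ("worse", "miss out")
    decline = prompt["decline_label"].lower()
    return prompt["accept_is_default"] or any(w in decline for w in guilt_words)

print(is_dark_pattern(DARK_PROMPT))     # True: pre-selected, guilt-framed
print(is_dark_pattern(NEUTRAL_PROMPT))  # False: neutral wording, no default
```

Real dark patterns are of course far subtler than a two-keyword check; the sketch only makes the structural ingredients (default selection, asymmetric framing) tangible.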

The $391.5 million settlement with Google over location tracking apps, led by Oregon Attorney General Ellen Rosenblum and Nebraska Attorney General Doug Peterson, was the largest-ever attorney general-led consumer privacy settlement.

As part of negotiations with the attorneys general, in addition to the multi-million dollar settlement, Google agreed to significantly improve location tracking disclosures and user controls starting in 2023.

Google said in a statement that the lawsuit relates to an issue that has already been resolved. The company added that it will begin providing more “granular” information about the data it collects during account setup, and that it has launched a new flow that lets users turn off and delete their location history and web and app activity in one simple step.

Apple x CNIL

France’s data protection authority (Commission nationale de l’informatique et des libertés, “CNIL”) fined Apple €8 million for privacy violations. The CNIL found that the US technology giant “did not obtain the consent of French iPhone users (version iOS 14.6) before depositing and/or writing identifiers used for advertising purposes on their terminals”.

The case stems from a complaint filed in March 2021 by the startup lobby France Digitale, which alleged that Apple failed to comply with data protection rules.

Apple, a self-proclaimed privacy champion, last year introduced App Tracking Transparency, a feature that asks users for their consent to be tracked online by third parties for targeted advertising purposes.

The CNIL’s restricted committee, the six-person body that decides on privacy fines, decided in mid-December to go beyond the recommendation of its rapporteur, who had advocated a fine of €6 million. The rapporteur also noted at the time that Apple had collected consent in iOS 15 in accordance with the law.

In a statement, an Apple spokesperson said Apple was “disappointed” and would appeal the decision.

Microsoft x AI

Microsoft plans to implement OpenAI artificial intelligence technologies in all Office applications such as Word, Outlook and PowerPoint.

It is believed that, thanks to AI, users will be able to generate passages of text in documents automatically from a short note. It is also stated that AI could be used to draft automated emails based on the information the user chooses to convey to the recipient.

In 2019, Microsoft invested $1 billion in OpenAI to develop new technologies for its products. In 2021, GitHub Copilot was introduced, a tool developed in collaboration with OpenAI that helps programmers write code.

Recently, news appeared online that Microsoft may use OpenAI’s artificial intelligence bot ChatGPT to present Bing search results in natural language instead of a list of links.

According to employees, Microsoft’s plans include integrating the same tools into its Microsoft 365 office suite to improve productivity. A source told The Information that the company has been developing personalised tools for email and document creation for more than a year and has been developing machine learning methods based on customer data.

A number of requirements must be met for artificial intelligence technologies to be used successfully. For example, ChatGPT cannot be guaranteed to always provide accurate results, as it does not continuously scan the web for news or updates. If Microsoft’s AI text generation tools produce inaccurate or offensive output, people may well stop using them.

The necessary data protection must also be guaranteed so that AI can be configured securely for individual customers without the risk of unauthorised access to data. Microsoft is working on privacy protection methods for OpenAI GPT-3 (Generative Pre-trained Transformer 3) and GPT-4 natural language processing algorithms, the source said.

While the prospect of artificial intelligence making life, and business life in particular, more practical is exciting, how the concerns it brings with it will be resolved remains, for now, an open question.

Meta x IDPC

The Irish Data Protection Authority (“IDPC”) has issued its final ruling on Meta’s unlawful processing of user data for personalised advertising. The IDPC is in a major conflict with European Union authorities.

Although the authorities of Austria, Germany, France, Italy, the Netherlands, Norway, Poland, Portugal and Sweden lodged formal objections against the draft decision, the IDPC has now issued its final ruling in line with the binding decision of the European Data Protection Board (“EDPB”).

It is said that this decision may not bring the case to an end. Issues such as the use of personal data to improve the Facebook platform or for personalised content are reportedly not covered by the decision, which therefore does not fully address the complaints made by NOYB. While the EDPB requested an additional investigation, it is argued that the IDPC’s narrowing of the scope of the complaint under Irish law is not in accordance with the law.

For the actual violation of users’ rights, only a minimal fine is on the table. While the EDPB had pushed for a much higher amount, the IDPC decided on the final figures. A total of €150 million was imposed for the transparency failures, but the fact that only €60 million was imposed for processing the data of millions of European users for nearly five years without any legal basis raised eyebrows.

API x Car Manufacturers

It was announced that millions of vehicles of 16 car manufacturers, including BMW, Mercedes and Toyota, were affected by API (Application Programming Interface) vulnerabilities, allowing cyber attackers to remotely control and monitor vehicles and leak personal information.

According to security researcher Sam Curry and his team, cyber attackers who discover API vulnerabilities can remotely honk the horn, switch on the flashers, track the vehicle’s location, lock and unlock the vehicle, and start and stop it. It is also possible to compromise the customer and dealer accounts of car manufacturers, gain administrative access to internal systems, hijack fleets and access customer and employee information.

Researchers have discovered at least 20 API vulnerabilities in various car models from 16 manufacturers. To mention some of these vulnerabilities:

In Mercedes-Benz vehicles, it is possible to access internal applications via misconfigured SSO (Single-Sign-On). SSO vulnerabilities can allow cyber attackers to access Mercedes-Benz’s GitHub repositories, build servers such as SonarQube and Jenkins, internal chat tools, and access internal cloud deployment services by joining almost all channels.

Remote Code Execution (“RCE”) could be performed on a large number of systems, triggering memory leaks that expose employee and customer information and allow accounts to be compromised. SSO vulnerabilities at BMW and Rolls-Royce could allow hackers to access any application, including those used by remote employees and dealers. For example, hackers could access internal dealer portals, query any Vehicle Identification Number (“VIN”), and retrieve BMW’s sales documents.

Other API vulnerabilities at Ferrari could allow hackers to take over any Ferrari customer account and access customer records. Bypassing access controls in the Ferrari APIs could also allow hackers to create, modify, and delete running back-office administrator user accounts. Furthermore, an attacker could add HTTP routes to Ferrari’s API host (api.ferrari.com) and discover associated confidential information.

Cyber attackers could obtain the location of any Porsche vehicle, send control commands and access customer information. In Jaguar, Land Rover and other vehicles, names, phone numbers, physical addresses, vehicle information and password hashes could be accessed remotely, while customers’ Reviver digital number plates could be overwritten.

The researchers also found that Honda, Infiniti, Nissan, Acura and other “ordinary” vehicles have API vulnerabilities that could allow cyber attackers to remotely lock and unlock the doors, start and stop the engine, pinpoint the precise location, flash the headlights and honk the horn using only the vehicle’s VIN.

Again, according to the researchers, a cyber attacker could remotely access KIA’s 360-view camera and watch live footage from the vehicle.
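The common thread in these findings is authorisation tied to an identifier alone. As a simplified sketch (the data and function names below are invented for illustration and are not taken from any manufacturer’s actual API), this class of flaw, often called broken object-level authorisation, looks roughly like this: an endpoint that accepts commands for any known VIN, versus one that also checks that the VIN belongs to the authenticated user.

```python
# Invented demo data; no relation to any real manufacturer's API.
VEHICLES = {
    "VIN12345": {"owner": "alice"},
    "VIN67890": {"owner": "bob"},
}

def unlock_insecure(vin: str) -> str:
    """Vulnerable pattern: the VIN alone authorises the command, so anyone
    who knows or enumerates a VIN can unlock the vehicle."""
    if vin in VEHICLES:
        return "unlocked"
    return "unknown vehicle"

def unlock_secure(vin: str, authenticated_user: str) -> str:
    """Fixed pattern: the command is authorised against the authenticated
    owner of the vehicle, not just the identifier."""
    vehicle = VEHICLES.get(vin)
    if vehicle is None:
        return "unknown vehicle"
    if vehicle["owner"] != authenticated_user:
        return "forbidden"
    return "unlocked"

print(unlock_insecure("VIN12345"))         # "unlocked": no ownership check at all
print(unlock_secure("VIN12345", "bob"))    # "forbidden": bob does not own this VIN
print(unlock_secure("VIN12345", "alice"))  # "unlocked": owner verified
```

Because a VIN is visible through a windscreen and often guessable, it behaves like a public identifier, not a secret; any API that treats it as a credential is exposed in the way the researchers describe.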

The researchers attributed the widespread API vulnerabilities to various automakers using systems with nearly identical functionality over the past five years. This discovery showed that automakers are in a rush to implement applications to gain a foothold in the smart car industry.

According to Jason Kent of Cequence Security, automakers almost never test their applications. In subsequent statements, it was reported that the car manufacturers have patched the API vulnerabilities so that they can no longer be exploited. API vulnerabilities are likely to become a topic we hear about frequently in the near future.

Meta x Voyager Labs

Meta has taken action to ban Voyager Labs, a provider of advanced artificial intelligence-based research solutions, from using Facebook and Instagram, alleging that the company collected data on some 600,000 users through fake accounts.

In a recent complaint, a request was made to permanently ban Voyager Labs from accessing Meta systems. The Guardian’s investigation revealed that the company partnered with the Los Angeles Police Department (“LAPD”) in 2019, claiming that it could predict future offenders using social media information.

Public records obtained by the nonprofit Brennan Center for Justice and shared with the Guardian in 2021 showed that Voyager’s services enabled police to spy on and investigate people by reconstructing digital lives and making assumptions about their activities, including their networks of friends.

One internal document suggested that Voyager considered using an Instagram name displaying Arab pride, or tweeting about Islam, a potential sign of extremism.

The lawsuit, filed in California Federal Court, detailed the activities Meta said it uncovered in July 2022, alleging that Voyager used surveillance software based on fake accounts to collect data from Facebook and Instagram, as well as Twitter, YouTube, LinkedIn and Telegram. According to the complaint, Voyager created and operated more than 38,000 fake Facebook accounts to collect information from more than 600,000 Facebook users, including posts, likes, friend lists, photos, comments, and information from groups and pages.

Affected users included employees of non-profit organisations, universities, media outlets, healthcare facilities, the US armed forces and local, state and federal government agencies, as well as retirees and union members, Meta’s filing said. It is unclear who Voyager’s customers were at the time and to which organisations it may have transferred data. However, Voyager, which has offices in the US, the UK, Israel, Singapore and the United Arab Emirates, allegedly designed its software to hide its existence from Meta and sold the data it obtained for profit.

It is known that in November 2021, after the internal records were revealed, Facebook sent a notice to the LAPD, demanding that it stop all social media surveillance use of “fake” accounts and stating that fake accounts “violate its real names policy”. While it is unclear whether the LAPD used the fake profile feature when working with Voyager, the email correspondence revealed that police officers described the system as “a great function” and “a must-have service”.

Telecommunications x JV

European telecommunications firms are planning to form a joint venture that would offer regional mobile network users the option of “personalised” ad targeting, following trials in Germany last year. It is not yet clear whether European Union regulators will sign off on their plans.

In a dossier submitted to the competition division of the European Commission (“EC”), Deutsche Telekom of Germany, Orange of France, Telefónica of Spain and Vodafone of the United Kingdom put forward the idea of creating a joint venture, jointly controlled and equally owned, to offer a “privacy-focused, digital identification solution”. The venture is focused on supporting the digital marketing and advertising activities of brands and publishers with what they describe as a “first-party” data ad targeting infrastructure.

The EC has until 10 February to decide whether to clear the joint venture and thus allow the firms to proceed with the commercial launch.

A spokesperson for Vodafone said that the telcos were not in a position to comment on the intended joint venture at this stage, when the EC is considering whether to approve the initiative.

Details of the firms’ plan to move to personalised ad targeting emerged during initial trials in Germany in the summer of 2022. The technology was later described as a “cross-operator infrastructure for digital advertising and digital marketing”, and Vodafone said it would rely on user consent for data processing. The project was nicknamed “TrustPid”.

The telco’s ad targeting proposal quickly came under the radar of a privacy watchdog, which raised concerns about the legal basis for processing mobile users’ data for adverts. The project also attracted early interest from data protection authorities in Germany and Spain.

The interaction with regulators was said to have led to some fine-tuning of how telcos proposed to obtain consent to make the process more open.

The telcos’ filing of 6 January 2023 confirms that “explicit user consent” (via opt-in) is the intended legal basis for the targeting. Discussing the approach, a representative of one of the telcos involved (Vodafone) confirmed that the aim was to obtain consent from users through pop-ups.

For now, a first-party-data alternative to the still ubiquitous tracking cookie requires a legal basis for processing people’s data for marketing, and legal bases other than consent look increasingly difficult to defend.

Constant guidance from EU data protection regulators, such as the large fine Meta received for trying to claim contractual necessity for processing user data for advertising, or the warnings TikTok received when it tried to move from consent to a legitimate interest claim for its “personalised ads”, is forcing companies to backtrack.

If the telcos’ joint venture gets the green light from the EC, scrutiny on the project will of course accelerate and close attention to technical details will raise new concerns. It is therefore too early to decide whether the initiative will be accepted by regulators and privacy experts.

Friction could also arise if mobile network users suddenly find themselves confronted with an intrusive layer of consent prompts while surfing the web; after all, since they already pay telecoms companies to provide the service, their tolerance is likely to be low.

Moreover, persuading mobile users to opt in to seeing adverts through a genuinely free, fair and non-manipulative choice, without resorting to dark patterns, is a major hurdle. Many people will refuse tracking if they are actually asked.

Therefore, even if the telcos were allowed to create their ad-targeting joint venture, there is no guarantee that mobile users on their networks would accept it. Still, if it happens, brands may have a chance to win over web users with a new approach.

Being transparently upfront about wanting to process people’s personal data for adverts offers an opportunity to do things differently, against a daunting status quo that fails to clearly explain how people’s data is being used, whether the processing ever stops, or what is actually happening.


Google Analytics x AEPD

In a recent decision, the Spanish Data Protection Authority (Agencia Española de Protección de Datos, “AEPD”) became the first European Union data protection authority to reject one of the complaints filed by the privacy activist organisation NOYB against 101 European companies over their use of the Google Analytics tool.

The AEPD’s decision differs from the position previously taken by the Austrian, French, Italian and Danish authorities. The AEPD’s decision concerns a complaint against the use of Google Analytics on the website of the Royal Spanish Academy (Real Academia Española, “RAE”) – a public, non-profit organisation tasked with preserving the Spanish language.

In response to the NOYB complaint, the RAE argued that:

- its sole and exclusive purpose in using the tool is to fulfil its mission of guaranteeing the development and preservation of the Spanish language, rather than any commercial gain;

- the use of tools to collect statistical data is crucial for the RAE to fulfil that purpose;

- the RAE only has access to aggregated statistical data and has no access to users’ IP addresses;

- no personal data is processed, since the only information that could identify a user is a random ID that Google assigns to its users, on the basis of which the RAE cannot re-identify anyone; and

- it discontinued its use of the Google Analytics tool on 3 December 2020.

Considering these arguments, the AEPD ruled that there was no evidence that the RAE had breached the GDPR and dismissed NOYB’s complaint.

The AEPD’s decision shows that the mere use of the Google Analytics tool is not automatically considered a violation of the GDPR. While the AEPD did not provide a detailed analysis of the specific reasons for rejecting the complaint, the decision may signal a shift in the divergent perspectives among EU data protection authorities on the level of risk associated with the use of the Google Analytics tool. The dominant view among EU data protection authorities remains against the use of Google Analytics.

WhatsApp x IDPC

WhatsApp has been fined €5.5 million by the Irish Data Protection Commissioner (IDPC) for breaching data protection law. The fine was similar to the €390 million fines levied against WhatsApp sister companies Facebook and Instagram for forcing users to accept changes to their terms of service.

However, the IDPC said that the fine was kept at a relatively low level as it had already fined WhatsApp €225 million “for breaching this and other transparency obligations within the same period”, so it “will not propose any sanctions in terms of corrective measures” and all 47 European regulators agreed with this element of the IDPC’s draft decision. WhatsApp was also given six months to harmonise its data processing structures with the GDPR.

While the IDPC had challenged WhatsApp over its lack of transparency, it had not objected to the tech giant’s reliance on “contract” as a legal basis. When this view was discussed among the other EU privacy regulators, many objected to the IDPC’s position. The matter was referred to the European Data Protection Board (EDPB), which held that “contract” cannot be relied on as a means of legitimising this personal data processing.

WhatsApp said in a statement: “WhatsApp led the industry in private messaging by providing end-to-end encryption and layers of privacy that protect people. We strongly believe that the way the service works is both technically and legally compliant. We rely on contractual necessity for service improvement and security purposes because we believe it is a fundamental responsibility to keep people safe and provide an innovative product in the way we operate our service. We disagree with the decision and plan to appeal.”

The IDPC also rebuffed the EDPB’s attempt to force it to open a new investigation. The EDPB had directed the IDPC to conduct a fresh inquiry covering all processing operations on the service, in order to determine whether WhatsApp Ireland processes special categories of personal data and whether it processes data for behavioural advertising or marketing purposes. Taking the view that this direction may involve the EDPB exceeding its competence, the IDPC considers it appropriate to bring an action before the Court of Justice of the European Union seeking the annulment of the direction.

PayPal x Cyber Attack

On 18 January, PayPal issued a notification about unauthorised access to thousands of PayPal users’ accounts between 6 and 8 December 2022. It was reported that 34,942 accounts were accessed without authorisation and that the incident took the form of a credential stuffing attack.

What is a Credential Stuffing Attack?

A credential stuffing attack is a type of cyber attack in which a threat actor takes credentials leaked from one service and, using an automated process, replays them against other platforms, exploiting the fact that many people reuse the same credentials across accounts.
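The mechanics can be sketched in a few lines of Python. The accounts and passwords below are entirely invented, and real services store hashed passwords rather than plaintext; the sketch only shows why password reuse is the ingredient that makes the attack work.

```python
# Invented example data: credentials "leaked" from one service and the
# password database of a second, unrelated service.
LEAKED_FROM_SITE_A = [
    ("user@example.com", "hunter2"),
    ("jo@example.com", "pass123"),
]

# In reality passwords would be hashed; plaintext is used only to keep
# the sketch short.
SITE_B_ACCOUNTS = {
    "user@example.com": "hunter2",       # same password reused -> at risk
    "jo@example.com": "uniqueS3cretXY",  # unique password -> replay fails
}

def stuffing_attack(leaked_pairs, target_accounts):
    """Replay every leaked (email, password) pair against the target service
    and collect the accounts where the reused credentials still work."""
    compromised = []
    for email, password in leaked_pairs:
        if target_accounts.get(email) == password:
            compromised.append(email)
    return compromised

print(stuffing_attack(LEAKED_FROM_SITE_A, SITE_B_ACCOUNTS))  # ['user@example.com']
```

This is also why the advice PayPal gave (unique passwords per service, plus two-factor authentication) directly defeats the attack: a unique password makes the replay fail, and a second factor blocks the login even when the password matches.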

In the official notification sent to all affected account holders, it was stated that the attacks were confirmed on 20 December. PayPal stated that there is no indication that personal information was misused or that unauthorised transactions were made on the accounts as a result of the incident, and emphasised that unauthorised third parties’ access to the affected accounts has been removed.

Although no definitive explanation was made, it was reported that cyber attackers may have accessed personal data such as name, address, social security number, date of birth. After this incident, PayPal offered two years of free access to identity monitoring services provided by Equifax to its customers whose accounts were affected. All users were also advised to change similar passwords used on different platforms, switch to unique, strong passwords and activate two-factor authentication.

European Court of Justice Judgement

The European Court of Justice (“ECJ”) issued a landmark judgement granting data subjects the right to know with whom their data is shared. The decision emphasised that companies are obliged, upon the data subject’s request, to disclose the recipients to whom they transfer personal data, and that enabling people to make informed decisions about the information they provide is an extension and a requirement of the principle of transparency.

What Happened?

After the Austrian postal service, which keeps a large database on Austrian citizens/residents, shared the information with third parties, data subjects requested information about the third-party data recipients in question.

The Austrian postal service refused to disclose this information and responded to the data subjects’ requests for access rights by listing only “categories of recipients” such as “advertisers trading through mail order and stationery outlets, IT companies, mailing list providers and associations such as charities, non-governmental organisations (NGOs) or political parties”.

This categorical response raised the question before the Austrian Supreme Court of whether such a listing fulfils the requirements of Article 15 of the GDPR.

In its assessment, the ECJ stated that Article 15 of the GDPR refers to the right of every individual to access data concerning him or her, as enshrined in Article 8(2) of the Charter of Fundamental Rights of the European Union (“CFREU”).

The ECJ stated that having access to the names of recipients is the only way for data subjects to exercise their rights effectively. Without that information, rights such as checking whether the data are processed lawfully, verifying that the data are disclosed only to authorised recipients, and exercising the rights to rectification, erasure or restriction of processing cannot be exercised effectively. Recipients should therefore be identified clearly and specifically, rather than listed in general categories.

In addition to all these considerations, the court also stated that this right may be limited in certain circumstances, although it did not provide detailed explanations. Leaving the burden of proof regarding these limitations on the data controller, the ECJ described the situations in which the right may be limited as follows:

Impossibility: where the recipients are not yet specific, for example where the data will only be transferred in the future, it may not be possible to provide detailed information about the data recipients.

The question raised by the Advocate General of the European Court of Justice as to whether a data controller can claim impossibility where the identification of the recipient would require a disproportionate effort was left unanswered by the ECJ.

Where the controller demonstrates that the data subject’s access requests are “manifestly unfounded”: “manifestly” means that something is obvious to any reasonable person. Requests will therefore only be considered unfounded when the unfoundedness is manifest in a real sense, for example when the request is made by an unauthorised third party.

Where the controller demonstrates that the data subject’s access requests are “excessive”: excessiveness may arise where requests are made more frequently than is necessary to obtain the required information. Requests made for purposes incompatible with the data subject rights set out in the GDPR, such as in support of a civil action, may also potentially be considered excessive.