What’s Happening Around the World?

While the field of data protection is developing at an accelerating pace in our country, developments around the world remain on the radar of the Personal Data Protection Authority (“Authority”). As we have repeatedly observed, the Authority keeps pace with the global agenda, in particular the European Union’s General Data Protection Regulation (“GDPR”), and strives to meet the requirements of the fast-moving data privacy world.

As GRC LEGAL, we closely follow the global agenda and, with this publication, present a selection of the latest news for your information.

The news below is from May 2024.

Netherlands Publishes New Data Scraping Guidelines

The Dutch Data Protection Authority (“DPA”) has recently published new guidance on data scraping, the automated collection and storage of information from the internet. The guidance highlights the significant legal risks scraping poses to personal data and the restrictions the GDPR imposes on it. According to the DPA, data scraping by private organizations and individuals without specific consent violates the GDPR. The guidance establishes that the public availability of information does not mean that the data subject has consented to any use of that data, and clarifies that scraping constitutes a new processing purpose requiring its own justification.
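The DPA’s definition is technical as well as legal: scraping is simply the automated extraction and storage of information from web pages. A minimal sketch in Python of what such extraction looks like (the HTML snippet and the `name` CSS class are hypothetical stand-ins for a fetched page; no real site is queried):

```python
# Minimal illustration of what "data scraping" means technically:
# automated extraction and storage of structured information from HTML.
# The markup below is a stand-in for a downloaded page.
from html.parser import HTMLParser

class NameScraper(HTMLParser):
    """Collects the text of every <span class="name"> element."""
    def __init__(self):
        super().__init__()
        self._capture = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "name") in attrs:
            self._capture = True

    def handle_endtag(self, tag):
        if tag == "span":
            self._capture = False

    def handle_data(self, data):
        if self._capture:
            self.names.append(data.strip())

page = ('<ul><li><span class="name">Alice</span></li>'
        '<li><span class="name">Bob</span></li></ul>')
scraper = NameScraper()
scraper.feed(page)
print(scraper.names)  # the scraped personal data, now stored by the operator
```

Precisely because a script like this can harvest personal data at scale without any interaction with the data subject, the DPA treats the collection itself, not merely its later use, as processing that requires a legal basis.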

The DPA concludes that the only potentially valid legal basis under the GDPR for scraping is the controller’s legitimate interest. However, when assessing the circumstances in which a company may rely on a legitimate interest to scrape, the DPA finds that scraping is almost always unlawful. According to the DPA, some uses may nevertheless be made GDPR-compliant, for example scraping of:

  • Public news websites.
  • Company-owned web pages.
  • Public online forums, etc.

The guidance published by the DPA reflects the growing interest of EU regulators in the practice of scraping. It remains unclear at this early stage whether other European regulators and courts will adopt the view set out in the guidance.


GRC LEGAL Commentary

Sensitivity towards the unlawful processing of personal data is increasing by the day. Unless a valid legal basis exists, data scraping likewise constitutes an unlawful processing activity. Indeed, the DPA supports this view: its newly published guidance characterizes data scraping as unlawful in most cases.

The guidance aims to prevent both malicious and unlawful use. In this respect, it remains to be seen whether it will set a precedent for other EU countries.

Council of Europe Adopts First International Agreement on Artificial Intelligence

The Council of Europe has adopted the first international legally binding agreement aimed at ensuring respect for human rights, the rule of law and democracy in the use of artificial intelligence (“AI”) systems. The agreement, which is open to countries outside Europe, sets out a legal framework covering the entire lifecycle of AI systems, addressing the risks they can pose while promoting responsible innovation. It adopts a risk-based approach to the design, development, use and decommissioning of AI systems and requires careful consideration of their potential negative consequences.

Commenting on the agreement, Marija Pejčinović Burić, Secretary General of the Council of Europe, said: “The Framework Agreement on Artificial Intelligence is a first-of-its-kind global agreement that will ensure that AI respects human rights. It is a response to the need for an international legal standard supported by states on different continents that share the same values to harness the benefits and mitigate the risks of AI. With this new agreement, we aim to ensure a responsible use of AI that respects human rights, the rule of law and democracy.”

The agreement is the result of two years of work by the Committee on Artificial Intelligence, an intergovernmental body bringing together 46 Council of Europe member states, the European Union and 11 non-member states, as well as representatives of the private sector, civil society and academia who participated as observers.

The agreement covers the use of AI systems in the public and private sectors. The treaty offers two ways for parties to comply with its principles and obligations when regulating the private sector; parties can choose to be directly obliged by the relevant treaty provisions or, alternatively, they can take other measures to comply with the treaty’s provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law. This approach is necessary because of the differences in legal systems around the world.

The agreement sets transparency and oversight requirements tailored to specific contexts and risks, including the identification of content generated by AI systems. Parties will have to take measures to identify, assess and mitigate potential risks and assess the need for a moratorium, prohibition or other appropriate measures on the use of AI systems where their risks may be incompatible with human rights standards.

They will also have to ensure accountability and responsibility for adverse impacts and ensure that AI systems respect equality, including gender equality, non-discrimination and privacy rights. In addition, parties to the agreement would have to ensure the availability of legal remedies for victims of human rights violations related to the use of AI systems and procedural safeguards, including the notification of persons interacting with AI systems that they are interacting with such systems.

With regard to risks to democracy, the agreement requires parties to take measures to ensure that AI systems are not used to undermine democratic institutions and processes, including the principle of separation of powers, respect for judicial independence and access to justice.

Parties to the agreement will not be obliged to apply its provisions to activities related to the protection of national security interests, but will be obliged to ensure that these activities respect international law and democratic institutions and processes. The agreement shall not apply to national defense matters or research and development activities, except where the testing of artificial intelligence systems may have the potential to interfere with human rights, democracy or the rule of law.

The agreement establishes a follow-up mechanism in the form of a Conference of the Parties to ensure its effective implementation.

Finally, the agreement requires each party to establish an independent oversight mechanism to oversee compliance with the agreement and to raise awareness, promote an informed public debate and conduct multi-stakeholder consultations on how AI technology should be used. The framework agreement will be opened for signature on September 5 in Vilnius (Lithuania) on the occasion of the Conference of Ministers of Justice.

GRC LEGAL Commentary

The international agreement adopted by the Council of Europe on AI systems is an important step towards the development and use of AI in accordance with ethical and legal standards. The agreement seeks to harness AI’s potential benefits while minimizing the risks it may pose and establishing a monitoring mechanism.

We believe that the establishment of a global standard will support the harmonized and responsible conduct of AI applications in different countries. To the extent that the agreement provides a strong foundation for the development of AI technology in a way that respects human rights, the rule of law and democratic values, we believe that it will encourage the use of AI in a way that benefits societies and in line with ethical principles.

European Union Trade Unions x Amazon

Trade unions from 11 European countries have written to data protection authorities across the bloc asking them to investigate Amazon’s data monitoring practices, according to a letter obtained by Euronews on May 7. Union leaders from European countries, including Austria, Germany, Ireland and Spain, where a significant number of workers work in Amazon’s warehouses, question the online marketplace’s use of surveillance and algorithmic management.

They argue that the tech giant’s use of hand scanners, activity tracking software, video cameras, GPS devices and other tracking technologies is having an impact on workers’ mental and physical health.

In 2021, Amazon was fined €746 million by the Luxembourg Data Protection Authority for processing personal data in violation of the GDPR. In December 2023, the French Data Protection Authority (Commission Nationale de l’Informatique et des Libertés, “CNIL”) fined Amazon France Logistique €32 million for violating European Union (“EU”) data protection rules by creating an “overly intrusive system” to monitor employee activity and performance.

An Amazon spokesperson said the company strongly disagreed with the CNIL’s conclusions, which it considers factually incorrect. “We have filed an appeal with the Council of State. Warehouse management systems are the industry standard and are necessary to ensure the safety, quality and efficiency of operations and to monitor the storage of inventory and the preparation of packages on time and in line with customer expectations,” the spokesperson said.

Following the latest developments, Oliver Roethig, Regional Secretary of UNI Europa, told Euronews that the labor management systems “undermine trust between workers and management, while at the same time systematically disregarding our privacy laws.” “It is high time to stand up and demand that these multinational companies respect workers’ personal data and their right to a dignified workplace. We need strong action now to ensure that our laws are fully enforced.”

An Amazon spokesperson said in a statement: “We are committed to using technology to enrich our employees’ experience, support them in their roles and help us serve our customers. We take data privacy seriously and believe our current policies and processes are in line with national laws and EU regulations.”


GRC LEGAL Commentary

Amazon’s methods of monitoring its employees’ activities constitute significant violations under the GDPR. The fine imposed on Amazon by the CNIL illustrates the sanctions that violations of privacy and data protection rules in the workplace can attract. While Amazon’s objections and defenses are ongoing, the case may set a precedent for data protection authorities in other European countries.

In our opinion, the initiation of similar investigations before other data protection authorities, in line with the unions’ demands, may lead to stricter enforcement of data protection and privacy standards in workplaces and encourage employers to adopt methods that interfere less with workers’ fundamental rights and freedoms.

Dutch Data Protection Authority Says No Facial Recognition in Supermarkets

No facial recognition for supermarkets, but yes for nuclear power plants: the Dutch Data Protection Authority (“DPA”) has published a guide explaining in which cases facial recognition may be used. In the document, the DPA addresses the most common legal questions regarding facial recognition technology and the processing of biometric data.

The DPA reiterated that introducing facial recognition technology in supermarkets would violate Dutch privacy law. The issue came to the fore in 2020, when a Dutch supermarket attempted to implement biometric surveillance to catch shoplifters and others who might pose a threat, but was blocked by the regulator.

Under the legal framework, the supermarket would have had to ask all customers for explicit permission to use facial recognition. However, according to the DPA, in practice this is almost impossible.

“The use of facial recognition is a significant violation of the privacy of all visitors, which outweighs the overriding private interests of the supermarket,” says the DPA.

Although the use of facial recognition is banned in most cases, the country makes exceptions. One of these is the use of the technology for authentication or security purposes, such as securing a nuclear power plant or securing dangerous goods. But for this, a data protection impact assessment must prove that the use of facial recognition technology is in the public interest.

The document defines the conditions under which facial recognition can be considered “personal” or “household” use, in which case the GDPR does not apply. One example is unlocking a phone with facial recognition. While this technology is permitted, the biometric data must be stored on the phone itself and the user must decide what happens to it. The DPA adds that options other than biometrics should also be offered for unlocking the phone.

In addition, the DPA confirmed that the ban on processing special personal data, which includes biometric and genetic data, still applies if facial recognition is used to verify a person’s identity.


GRC LEGAL Commentary

The DPA’s guidance on biometric data imposes strict restrictions and requirements on the use of facial recognition technology. Its use in supermarkets is severely restricted due to the risk of large-scale, unauthorized collection of customers’ biometric data. Importantly, however, exceptions remain possible in special circumstances such as public safety and the protection of critical infrastructure, where appropriate assessments and a public interest can be demonstrated.

Similarly, in our country, the Guidelines on Issues to be Considered in the Processing of Biometric Data published by the Personal Data Protection Authority set out the administrative and technical measures to be taken to ensure data security when processing biometric data, and impose restrictions on the use of such data.

Considering that the phones we use every day are unlocked by providing biometric data, it has become essential to expand such guidance, to illustrate it with examples, and to raise awareness among institutions and organizations engaged in biometric data processing.

Germany Uses Biometric Data of 3 Million People to Test Facial Recognition Systems!

Biometric tracking test data is raising ethical concerns in Germany. Even the tests for Germany’s adoption of real-time facial recognition are drawing criticism for failing to meet the country’s self-imposed data privacy and ethical standards. Following the announcement that German law enforcement agencies will use high-resolution cameras and live facial recognition to apprehend suspects, the country is witnessing growing concerns about the proliferation of biometric surveillance technologies.

This new development, which follows the introduction of the European Union’s Artificial Intelligence Law, is sparking new debates on privacy, civil liberties and the consequences of such systems. As the country navigates the delicate balance between security and individual rights, citizens and civil society organizations are expressing growing concern about the expanding scope of biometric monitoring.

In Germany, the revelation that the Federal Criminal Police Office (“BKA”) used images of nearly three million people to test facial recognition software has raised concerns about the legality and ethics of such practices. According to information obtained by the Bavarian public broadcaster BR, the BKA’s actions raise questions about the limits of data use permitted to security authorities.

In 2019, the BKA reportedly received around five million facial images from the central police information system INPOL-Z for a software test conducted by the Fraunhofer Institute for Computer Graphics Research (IGD). EGES, a project to improve the BKA’s facial recognition system, aimed to evaluate the accuracy of facial recognition algorithms from multiple manufacturers. Notably, the images used in the test belonged to nearly three million people.

Supporters of biometric tracking, for their part, argue that these technologies are essential tools for enhancing security and streamlining processes in an increasingly digitalized world. They emphasize the potential benefits of biometric surveillance in preventing crime, identifying suspects, and increasing efficiency in sectors as diverse as transportation and healthcare.

Internal correspondence between the BKA and Federal Data Protection Commissioner Ulrich Kelber reveals efforts to legitimize the tests under the label of “scientific research”. However, doubts have been raised about the legality of the process and there have been calls for clearer regulations governing software testing by security authorities.


GRC LEGAL Commentary

Since personal data protection law develops alongside technology, the concerns, problems and benefits technology creates are ever more present on the agenda. This practice by German law enforcement in the name of “security” is still at the testing stage, yet the processing of some five million facial images, sensitive personal data belonging to nearly three million people, even for a test phase has caused justified concern among the German public.

States are obliged to ensure the security of their citizens, and proportionality is one of the principles they must observe in doing so. The practice at hand must therefore be assessed both against general legal rules and against proportionality, a fundamental principle of personal data protection law. It is difficult to regard as proportionate the BKA’s processing of biometric data of nearly three million, and potentially all, of its citizens in the name of German citizens’ security.

Ryanair Accused of Violating GDPR for Biometric Passenger Verification

Travel policy advocacy group EU Travel Tech has filed a formal complaint against airline Ryanair with the French and Belgian DPAs, challenging the airline’s new biometric data processing policy for customer verification.

The policy, which came into force on December 8, 2023, requires biometric verification for customers without existing accounts, including those who book through Online Travel Agencies (“OTAs”). These customers will need to submit live self-images or signature images along with their passport details in order to manage their bookings and check-in online.

EU Travel Tech argues that this requirement violates individual privacy and is contrary to the GDPR. The organization contends that Ryanair’s biometric verification process breaches the GDPR’s principles of lawfulness, fairness and transparency, particularly in relation to the handling of sensitive biometric data. EU Travel Tech’s members include leading OTAs such as Airbnb, Booking.com and Expedia Group, as well as Amadeus.

The group has called on DPAs to urgently investigate Ryanair’s practices and impose interim measures to suspend the biometric verification process under Article 66 of the GDPR. The organization stresses the need for immediate action due to the potential harm to individuals’ rights and freedoms and suggests that a significant fine may be necessary, as provided for in Article 83 of the GDPR.

In addition to filing the complaint, EU Travel Tech, together with the European Associations of Travel Agents and Tour Operators and the European Federation of Travelers, sent a letter to the Vice-President of the European Commission, Věra Jourová, urging the Commission to investigate measures to ensure timely and robust implementation of the GDPR.

The coalition hopes that these actions will ensure a swift and decisive response to protect the data rights of individuals across Europe.


GRC LEGAL Commentary

EU Travel Tech’s complaint sets an important precedent for the protection of the fundamental principles of the GDPR. Ryanair’s biometric verification policy should now be carefully scrutinized for GDPR compliance by the DPAs with which the complaint has been lodged. A prompt and effective investigation by the DPAs is essential for the protection of biometric data and for the imposition of sanctions where individuals’ rights are infringed. In our opinion, it is also important for the European Commission to verify whether such policies comply with the GDPR and to take measures where necessary.