What’s Happening in the World?
While the field of Data Protection is developing at an accelerating pace in our country, worldwide developments remain on the radar of the Personal Data Protection Authority (“Authority”).
As past examples repeatedly show, the Authority keeps pace with the global agenda, especially the regulations of the European General Data Protection Regulation (“GDPR”), and strives to meet the requirements of the fast-moving data privacy world.
As GRC Legal Law Firm, we closely follow the global agenda and present a selection of current developments for your information.
The news items below are from October 2022.
USA x EU
The President of the United States of America (“US”), Joe Biden, has issued a decree implementing the commitments made by the US to the European Union (“EU”) under the “US-EU Data Privacy Framework Agreement”, which is designed to address the EU’s privacy concerns and which, for the first time, also allows the US government to flag problematic issues related to European surveillance programmes.
The decree, recently signed by President Joe Biden, establishes a Data Protection Review Court within the Department of Justice, allowing EU citizens to seek redress over how their data is collected and used by US intelligence agencies. At the same time, intelligence agencies’ data collection activities were limited to uses that are necessary and proportionate.
Peter Harrell, Senior Director for International Economics and Competitiveness at the White House National Security Council (“NSC”), said in a statement that while the main focus of the decree is to ensure the continuation of data transfers between the EU and the US, with companies meeting the standards set by the Court of Justice of the European Union in 2020, the privacy framework will also extend these privacy rights to US citizens.
Peter Harrell also said that the US Attorney General would need to designate the EU as a qualifying state or territory under the privacy framework, which would open the door for the US government to also consider EU surveillance measures.
The move is a significant win for the US government, which has long felt that Brussels holds all the cards in data flow negotiations and that US national security laws are being held to an even higher standard than the EU’s own regulations. Under the new framework, the US will be able to cut off access to the redress mechanism for countries or regions that do not meet its standards.
Peter Harrell also explained that the Attorney General’s decision on designating the EU is intended to ensure that the laws of the EU and/or each of its member states, on matters within their jurisdiction, include appropriate safeguards in their own signals intelligence activities for the personal information of US persons transferred from the US to the EU.
In designating countries or territories as “qualified” for the redress mechanism, the Attorney General said he would also consider other factors, such as whether designation is compatible with US national interests. A European Commission official acknowledged that the designation exercise needs to offer its own safeguards, even if it does not amount to an assessment in itself.
“The dispute settlement mechanism is open to countries and regional organisations that offer appropriate security measures and have data transfer arrangements with the US,” said the NSC and other Biden administration officials, who are nevertheless confident that the European Commission will approve the decree and that it will withstand legal challenges from privacy advocates.
Max Schrems, a privacy advocate who filed lawsuits in 2015 and 2020 that overturned the Privacy Shield, told a national news agency that he had reviewed the details of Joe Biden’s decree and wanted to prepare for a potential challenge. “Since there is no change in mass surveillance, I predict that this will go back to the Court of Justice of the European Union,” Max Schrems said.
The decree will probably not be approved by Brussels until March 2023, but the signature will be a significant benefit for businesses that share data between the two countries. Companies will be able to use the signed decree as a legal basis for data transfers between the US and the EU even before it is approved by the European Commission. This will also benefit companies such as Facebook, which is expected to be prevented from sending EU data to the US without appropriate privacy protections.
“The US government has made swift legislative changes that will be important in improving business reliability,” said John Miller, Senior Vice President for Policy at the Information Technology Industry Council.
GDPR x United Kingdom
Recent announcements by the British Government that it will replace the GDPR, and the halting of the Data Reform Bill, have raised new questions about the United Kingdom’s (“UK”) data adequacy with the EU.
Dr Sam De Silva, chair of the Law Specialist Group of BCS, The Chartered Institute for IT (“BCS”), and a partner at international law firm CMS, has warned UK businesses that they may find themselves having to comply with both regulatory regimes under the new legislation.
“The UK currently has the benefit of an EU adequacy decision allowing the free flow of personal data from the EU to the UK. Nevertheless, the EU Commission needs to continuously monitor developments in UK law to assess whether the UK still meets the ‘essential equivalence’.”
This means that any significant deviation from the GDPR would put the UK at risk of losing its adequacy. Interestingly, Michelle Donelan, minister of the Department for Digital, Culture, Media & Sport (“DCMS”), made clear in a recent speech that her intention is to protect the UK’s adequacy, but Dr De Silva said that if the government plans to move away from the GDPR altogether, it is not clear how this will be possible in practice.
Dr De Silva said: “We need more detail about what this means in practice. One interpretation is that there is no plan to protect or retain any aspect of the GDPR in UK law, and that this is why the Data Reform Bill has now been dropped, since the bill appeared to amend the GDPR only in certain areas.”
This suggests that the government wants a “light touch” approach to regulation, but it is not clear what this means in practice. For example, will UK law still be “visible and palpable” in terms of content and structure, i.e. with different obligations for controllers and processors, certain individual rights and the need for accountability? Or will the government propose something completely new? Most UK businesses have worked with the GDPR for more than four years, and most have spent significant time and money building and operating their compliance programmes.
UK businesses with customers in the EU will have to continue to comply with GDPR regardless of the new UK laws in place, but the risk for UK businesses is that they may have to comply with both regulatory regimes. It is hoped that the majority of businesses will opt to comply with the stricter rules.
Dr De Silva said that lost profits are often cited by the government (drawing on a report by Oxford University) as a reason to repeal the GDPR, but he urged caution about this inference for three reasons: first, the negative effects observed on firm performance may partly reflect temporary compliance costs; second, if the GDPR gradually becomes a global standard as more countries adopt similar regulations, companies targeting EU customers will be less disadvantaged over time; and third, any such assessment appears to be silent on total welfare effects, which would need to take into account the potential benefits to citizens with data protection concerns.
Google x Incognito Mode
Despite Google’s persistent praise of the incognito mode in its Chrome web browser, the feature is allegedly just a joke among the company’s engineers.
According to Bloomberg’s report, Google engineers were joking about incognito mode as far back as 2018; in their view the problem lay in its iconography (the visual images and symbols used), or more precisely in the fact that incognito mode did a poor job of actually delivering the level of “privacy” it suggested.
In correspondence shared by Bloomberg, a Google engineer reportedly stated that the Company should change both the spy man icon and the incognito name because they gave users the wrong impression. The root of the problem is that users who do not fully grasp the technology see the incognito name and icon and believe their browsing is far more protected and invisible than it actually is.
Another employee responded with an image of Guy Incognito looking like Homer Simpson, stating that the icon should look more like the Simpsons character because it “…accurately conveys the level of privacy”.
When you use Chrome’s incognito mode, the browser does not save your browsing history, cookies and site data, or information entered in forms on that device. In other words, incognito mode protects your privacy locally: on a computer you share with your roommate, for example, it makes it harder for them to snoop through your search history.
Although incognito is a mode that offers privacy, it doesn’t mask your IP address, location, or other potentially identifying data. It is not a VPN; its primary purpose is to hide your activity on a small, purely local scale. And even in incognito mode, anything you bookmark or download is still kept.
The correspondence in question surfaced as part of a lawsuit against Google that is aiming for class action status over allegations that the Company collected users’ data even when they were using incognito mode, and more importantly, that the Company allegedly led these users to believe that their information was protected. In related news, a spokesperson for Google told Bloomberg that the Company is “clear about how it works and what it does” regarding incognito mode.
Meta x Quest Pro
In November 2021, Facebook announced that it would delete facial recognition data from images of more than 1 billion people and stop offering the ability to automatically tag people in photos and videos.
Luke Stark, Assistant Professor at Western University in Canada, told WIRED magazine at the time that he saw the policy change as a PR tactic because Meta’s virtual reality initiative would likely lead to expanded physiological data collection and raise new privacy concerns.
Meta, Facebook’s parent company, unveiled its latest virtual reality headset, the Quest Pro, proving Stark’s prediction correct. The new model tracks eye movements and facial expressions with a set of five inward-facing cameras, allowing an avatar to project smiles, blinks or raised eyebrows in real time. With the help of five external cameras, the aim is eventually to give avatars legs that replicate a person’s full body movements in the real world.
After Meta’s presentation, Stark said that the result was predictable and that he suspected the default “off” setting for face tracking would not last long, adding: “It has been clear for several years that animated avatars have a significant impact on loss of privacy. This data is much more detailed and much more personal than the image of a face in a photograph.”
At the event announcing the new virtual reality headset, Meta’s CEO Mark Zuckerberg described the collection of this new, more intimate kind of personal data as a necessary part of his virtual reality vision. “When we communicate, all of our non-verbal expressions and gestures are often more important than what we say, and the way we connect should reflect that virtually,” he reasoned.
Zuckerberg also said that the built-in cameras, combined with the cameras in the Quest Pro’s controllers, will power photorealistic avatars that look more like a real person and less like a cartoon; no timeline was given for the release of this feature. A virtual selfie of Zuckerberg’s cartoonish avatar was widely mocked this summer, and he later admitted it was “basic”; the avatar upgrades were announced shortly afterwards.
Although there is little evidence that such technology actually works, companies including Amazon, as well as various research projects, have previously used conventional facial photographs to try to predict a person’s emotional state.
Data from Meta’s new headset could provide a new way of gauging a person’s interests or reactions to content. While Meta is experimenting with shopping in virtual reality, it has filed metaverse-related patents that envisage personalised ads and media content tailored in response to a person’s facial expressions.
Meta product manager Nick Ontiveros explained to reporters that Meta does not use this information to predict emotions. The raw images and pictures used to power these features are stored in the virtual reality headset, processed locally on the device, and deleted after processing, Meta says. Although the raw images are deleted, the information derived from those images can be processed and stored on Meta servers, according to eye-tracking and facial expression privacy notices issued by the company.
This data about the Quest Pro user’s facial and eye movements may also be shared with companies other than Meta. A new motion software development kit (“SDK”) will give outside developers access to abstracted gaze and facial expression data to animate avatars and characters. Meta’s privacy policy for the headset states that data shared with outside services is subject to those services’ own terms and privacy policies.
Expression-capturing technology already exists in photo apps and iPhone Memoji, but Meta said in a statement that if body language is captured simultaneously, people could use virtual reality headsets to attend meetings or conduct business.
Meta announced that it will soon integrate Microsoft productivity software, including Teams and Microsoft 365, into its virtual reality platform. Autodesk and Adobe are working on virtual reality applications for designers and engineers, and an integration with Zoom will allow people to participate in video meetings as Meta avatars.
Quest Pro’s success may depend on whether people will buy hardware with new data collection capabilities from a company with Meta’s track record: in the Cambridge Analytica scandal, it failed to protect user data or monitor the activities of third-party developers with access to its platform. This could add to the difficulties Mark Zuckerberg faces in selling his vision of the metaverse.
Meta has reported no more than 300,000 monthly active users for its social virtual reality platform Horizon Worlds, and The New York Times recently reported that even Meta employees working on the project use Horizon Worlds very little.
Avi Bar-Zeev, a virtual and augmented reality consultant who helped create Microsoft’s HoloLens mixed reality headset, said Meta deserves credit for deleting the camera images on the Quest Pro. He also noted that Quest 2, the 2020 predecessor of the device, raised serious privacy issues, and that he feels the same way about the Quest Pro.
Bar-Zeev added that facial and eye movements yield data on how people react to content or experiences, which could allow Meta or other companies to exploit people emotionally in virtual reality: “My concern is not that we will be presented with a lot of ads that we hate; they will learn so much about us that they will present us with a lot of ads that we love, and we will not even know that they are ads.”
Kavya Pearlman, founder of the XR Safety Initiative, tried a demo of the Quest Pro before its release and noted that the screens prompting users to activate face and eye tracking use a dark pattern designed to nudge people into adopting the technology. The US Federal Trade Commission (FTC) issued a report last month advising companies not to use designs that distort privacy options.
Pearlman said, “In my past experience, we are on a very dangerous path, and if we are not careful, our autonomy, free will and agency will be at risk. Companies working on virtual reality need to publicly discuss what data they collect and share, and set strict limits on the inferences they can make about people.”
Shein x Data Breach
Shein, an ultra-fast fashion e-commerce platform founded in China that recently moved its core assets to Singapore, has come under scrutiny for a 2018 data breach even as it continues to conquer Generation Z markets around the world.
Zoetop, the company that owns Shein and its sister brand Romwe, was fined $1.9 million by New York for failing to properly address a security breach, according to a statement from the state Attorney General’s (“AG”) Office.
According to the AG’s statement, a cybersecurity attack in 2018 resulted in the theft of 39 million Shein account credentials, including those of more than 375,000 New York residents. An investigation by the AG’s office found that Zoetop only communicated with “a fraction” of the 39 million compromised accounts, and for the vast majority of affected users, the firm failed to alert them that their login credentials had been stolen.
The AG’s office also concluded that Zoetop’s public statements about the data breach were misleading. At one time, the company falsely stated that only 6.42 million consumers were affected and that it was in the process of notifying all affected users.
The disclosure of the breach created a PR problem for the company, and Shein claims it has since significantly strengthened its security measures, stating: “We fully co-operated with the New York Attorney General and are pleased to have resolved this matter. Protecting our customers’ data and maintaining their trust is a top priority, especially given the ongoing cyber threats facing businesses around the world. Since the data breach in 2018, we have taken important steps to further strengthen our cybersecurity posture and we remain vigilant.”
Much has changed since 2018. Shein has grown from a then-rising online fast fashion retailer into an all-encompassing e-commerce platform that threatens Amazon; in the second quarter of 2022, US downloads of its app surpassed Amazon’s for the first time. The breach may be years in the past, but it is worth noting that Shein has only been in business since 2008, so four years ago is fairly recent in the firm’s history.
Meta x App Forgery
Meta has publicised an internal security report which found that apps designed to steal Facebook login credentials are prevalent in both major app stores.
Meta reported that there are more than 400 malicious apps of this nature on Android and iOS, which use simple but professional-looking artwork to give themselves a legitimate appearance, while posting fake positive reviews to suppress negative comments when users realise that the promised functionality is not being delivered.
What gave away the truth about the apps designed to steal Facebook login credentials was that they all placed a Facebook button on their start screens, prompting the victim to enter their credentials to use the app.
Because these apps do not take the approach of installing malware or keyloggers, but instead simply request Facebook login credentials as a condition of launching the app, they appear to fly under the radar of Google and Apple’s security controls, allowing threat actors to steal users’ information. While it is not uncommon for mobile apps to have some form of embedded Facebook functionality, it is unusual to require the user to provide credentials before the app is launched.
Meta said it reported its findings directly to Apple and Google, reached out to potentially affected Facebook users, and the apps were removed before the report was published.
There is no estimate of how many users’ login credentials may have been compromised by these malicious apps.
The apps appear to have targeted users who only log into Facebook with a simple username and password, not users who have activated Two Factor Authentication (“2FA”). Of course, even if users have secured their accounts with 2FA, there is nothing stopping attackers from trying various other services to see if credentials have been reused.
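Why 2FA blunts this kind of credential theft is easy to see in code: the one-time code is derived from a shared secret that a fake login screen never obtains. As a minimal sketch of the idea (a standard RFC 6238 time-based one-time password, not Facebook’s actual implementation), using only Python’s standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password.

    A phished username/password alone cannot pass this check,
    because the code is derived from a secret the attacker lacks.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890", encoded in Base32
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # -> "287082"
```

Because the code changes every 30 seconds and never leaves the authenticator, a malicious app that captures only the password gains nothing durable.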
While the Facebook login-theft attempts spanned different categories of applications and were clearly well organised, the most prevalent were basic photo editor apps, which often offer attention-grabbing functionality such as turning user photos into caricatures or adding clothing over selfies; fake photo editors accounted for over 42% of all malicious applications detected.
Other main categories include telephony tools such as VoIP (Voice over Internet Protocol) calling apps, which make voice calls over the internet; video games; and fake VPNs and business software, often promising access to functions and insights that similar free applications do not offer.
In addition, applications for horoscopes, self-help support, media players, and wallpaper collections were rarely found in Meta’s findings.
Meta stresses the need to be sceptical of apps that ask for Facebook login credentials up front, recommends enabling 2FA as an additional protection, and advises watching for tell-tale signs of malicious activity, since these malicious apps deliver little of the functionality they promise.
Cybercriminals are showing a renewed interest in all major social media platforms and view account takeover as a relatively easy and low-risk form of cybercrime. While the conventional wisdom is that these accounts are of little value unless they belong to a celebrity or have a major platform, hackers have found creative ways to compromise large accounts.
While there are many different practices for stealing social media login credentials, one method that has grown in popularity recently is the use of apps and their contact lists to defraud legitimate advertising programmes.
In a recent scam on Facebook, attackers compromised an account and then attempted to redirect the user’s entire contact list to a URL that displayed legitimate adverts that the criminals were monetising.
The number of malicious apps that hijack user devices for similar types of ad fraud has been increasing significantly in app stores since 2020. Cybercriminals use the stolen social media accounts to push fraudulent links to the victims’ close friends and other followers, or to commit cryptocurrency fraud.
Clearview AI x Data Breach
The French Data Protection Authority (Commission Nationale Informatique & Libertés, “CNIL”) has fined US-based Clearview AI (“Clearview” or “the Company”) 20 million euros for collecting people’s data without a legal basis. Clearview provides facial recognition technology to various organisations, especially public institutions, and has built a very large database for its artificial intelligence system based on facial recognition.
Clearview scrapes photos that are directly accessible on many websites, including social media channels (where images can be viewed without logging into an account), and also extracts images from videos available online on all platforms; it is said to have collected 20 billion images worldwide.
With this image collection, the Company sells access to its database through a search engine in which a person can be looked up by photo, and offers this service to law enforcement agencies to identify perpetrators or victims of crime. US law enforcement agencies are known to have used the technology since 2019, especially in the context of crimes involving children, such as child pornography, that have gained visibility on the internet.
The Company’s facial recognition technology finds a person from their photo via this search engine. To do so, the Company creates a “biometric template”, in other words a digital representation of the person’s facial features. This biometric data is highly sensitive, not least because it is linked to physical identity and allows people to be uniquely identified, and it is safe to say that the vast majority of people whose images feed the search engine are unaware of it.
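To make the “biometric template” idea concrete: such a template is typically a numeric vector (an embedding) summarising facial geometry, and searching the database reduces to finding the stored vector most similar to the probe image’s vector. The sketch below illustrates only that matching step, with invented toy vectors and names; Clearview’s actual pipeline is proprietary and not public:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(probe, database, threshold=0.8):
    """Return the identity whose stored template is most similar to the
    probe, or None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for identity, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Toy 4-dimensional templates; real systems use vectors with hundreds of
# dimensions produced by a neural network. Names and numbers are invented.
db = {"person_a": [0.9, 0.1, 0.3, 0.2], "person_b": [0.1, 0.8, 0.2, 0.7]}
print(best_match([0.88, 0.12, 0.28, 0.22], db))  # -> person_a
```

The sensitivity the CNIL highlights follows from exactly this property: the template is stable across different photos of the same face, so it uniquely identifies a person wherever their image appears.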
What Happened?
In May 2020, CNIL launched an investigation into Clearview after receiving complaints about the Company’s facial recognition software, and in May 2021 Privacy International, an organisation that monitors privacy violations by governments and businesses, similarly warned CNIL about Clearview’s practices.
During this procedure, CNIL cooperated with its European counterparts to share the results of the investigations, since the Company has no European establishment and each authority is competent in its own territory. The investigations conducted by CNIL revealed several violations by the Company of the GDPR (in French, Règlement général sur la protection des données, “RGPD”):
Unlawful processing of personal data as the collection and use of biometric data was carried out without a legal basis (breach of Article 6 of the GDPR),
Failure to take effective and satisfactory account of the rights of individuals, in particular requests for access to personal data (breach of Articles 12, 15 and 17 GDPR).
On 26 November 2021, Clearview was instructed by the President of the CNIL in an official notification to cease the collection and use of personal data of individuals located on French territory without a legal basis and to comply with erasure requests, enabling individuals to effectively exercise their rights.
The Company had two months to comply with the instructions set out in the official notice and to justify its compliance to the CNIL, but no response was received. The President of the CNIL therefore referred the matter to the Restricted Committee in charge of sanctions, which decided to impose the maximum fine of 20 million euros provided for in Article 83 of the GDPR.
Taking into account the very serious risks that the Company’s data processing activities pose to the fundamental rights of data subjects, the Committee reiterated its order to the Company to cease collecting and processing the data of individuals residing in France without a legal basis and to delete their data within two months, adding a penalty of 100,000 euros for each day of delay beyond that period.
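The financial mechanics of the order are simple: a fixed fine plus a daily penalty once the two-month window lapses. A tiny illustrative calculation (the figures come from the decision described above; the function itself is only an illustration, not part of the decision):

```python
BASE_FINE_EUR = 20_000_000    # maximum fine under GDPR Art. 83
DAILY_PENALTY_EUR = 100_000   # per day past the two-month deadline

def total_fine(days_late):
    """Total exposure once the two-month compliance window has lapsed.

    `days_late` counts days beyond the deadline; days before the
    deadline add nothing.
    """
    return BASE_FINE_EUR + max(0, days_late) * DAILY_PENALTY_EUR

print(total_fine(0))   # 20000000 -> deadline met, base fine only
print(total_fine(30))  # 23000000 -> one month late adds 3,000,000 euros
```

A single month of continued non-compliance would thus add 3 million euros on top of the 20 million euro base fine.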
In order for personal data to be considered lawfully processed, it must be based on one of the legal grounds set out in Article 6 of the GDPR. The Company’s facial recognition software, which is completely contrary to this rule, is therefore clearly unlawful. Moreover, the Company does not have the consent of the persons whose images it has collected in order to supply its software.
It is emphasised that the Company has no legitimate interest in collecting and using this data, in particular given the intrusive and intensive nature of the process that makes it possible to obtain the images available on the internet of millions of internet users in France. It is, of course, inconceivable that these people, whose photographs or videos are accessible on various websites, including social media, would consider it reasonable for their images to be processed by the Company into a facial recognition system that could be used by states for law enforcement purposes.
Another issue raised in the complaints received by CNIL is the difficulty data subjects experience when trying to exercise their rights. In addition to limiting the right of access, without justification, to data collected during the twelve months preceding the request and to two requests per year, the Company began responding only to certain requests after receiving a high number of requests from the same person. It is also known to leave erasure requests unanswered or to provide only partial responses.
This silence by the Company, which refused to cooperate with the CNIL throughout the procedure, was found to breach the obligation to cooperate and therefore also Article 31 of the GDPR. Whether the Company will break its silence after the sanction decisions remains an open question.
Guidelines 9/2022
As it is known, when a data breach occurs, the obligations of data controllers and data processors to make a data breach notification also come to the fore.
Although the Member States’ internal practices vary as to how a breach must be notified to the supervisory authority, in which cases notification is required, and how the data subjects affected by the breach must be contacted, it is fair to say that these practices have begun to be gathered under one roof by the provisions of the GDPR.
The GDPR contains provisions on when and to which authority a breach should be notified and what information should be included as part of the notification, and in this context, it imposes a responsibility on data controllers to take the necessary measures by acting quickly in the event of a breach and to include the notification to the supervisory authorities in the breach response plan.
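The headline timing rule here is Article 33(1) GDPR: the supervisory authority must normally be notified within 72 hours of the controller becoming aware of the breach, and any later notification must be justified. A breach-response plan can encode that deadline directly; the helper below is an illustrative sketch, not legal advice:

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)  # GDPR Art. 33(1)

def notification_deadline(became_aware_at):
    """Latest moment to notify the supervisory authority without
    having to justify a delay under Article 33(1)."""
    return became_aware_at + NOTIFICATION_WINDOW

def is_overdue(became_aware_at, now):
    """True once the 72-hour window has passed."""
    return now > notification_deadline(became_aware_at)

# Hypothetical incident: controller becomes aware on 3 October 2022, 09:00 UTC
aware = datetime(2022, 10, 3, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2022-10-06 09:00:00+00:00
print(is_overdue(aware, datetime(2022, 10, 7, tzinfo=timezone.utc)))  # True
```

In practice the clock starts when the controller has a reasonable degree of certainty that a breach has occurred, which is why the Guidelines devote considerable attention to what “awareness” means.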
Similarly, the Article 29 Working Party (“WP29”), in its Opinion 03/2014 on personal data breach notification, provides guidance to data controllers in deciding whether to notify data subjects in the event of a breach.
Within the framework of all these explanations, Guidelines 9/2022 on Personal Data Breach Notification under GDPR, adopted on 10 October 2022, aims to clarify the breach notifications and data subject communications that data controllers are obliged to make under these regulations, and to shed light on practice by illustrating, with various examples, the steps data controllers can take to fulfil these requirements.