What’s Happening in the World?

While the field of data protection is developing at an accelerating pace in our country, developments worldwide remain on the radar of the Personal Data Protection Authority (“Authority”).

As we have seen repeatedly in past examples, the Authority keeps pace with the global agenda, in particular the European General Data Protection Regulation (“GDPR”), and strives to meet the requirements of the fast-moving data privacy world.

As GRC Legal Law Firm, we closely follow the world agenda and present a selection of the current developments for your information with this content.

Government x Chinese Internet Giants

China has the largest internet user base in the world and a huge market for e-commerce, gaming and smartphones, so technology companies operating in these areas have grown exponentially in recent years.

Chinese internet giants such as Alibaba, TikTok owner ByteDance and Tencent have shared with China’s regulators details of their algorithms, data that is critical to the growth of social media platforms because it determines the content users see and the order in which they see it. American Meta and Alphabet, by contrast, maintain that this data is a trade secret.

The Cyberspace Administration of China published a list of 30 algorithms and stated that the list will be routinely updated to prevent data misuse. Among the listed algorithms is one belonging to Taobao, the e-commerce site owned by Alibaba; it was announced that Taobao’s algorithm “recommends products or services to users through digital footprints and historical search data”.

ByteDance’s algorithm for Douyin, the Chinese version of TikTok, measures users’ interests through what they click on, comment on, “like” or “dislike”.
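ByteDance has not published Douyin’s actual recommendation code, so the following is a purely illustrative sketch of the reported idea: engagement signals such as clicks, comments and likes are aggregated into per-topic interest scores, which then drive what is recommended next. All names and weights here are hypothetical.

```python
# Purely illustrative sketch; not ByteDance's actual algorithm.
# Engagement signals are turned into per-topic interest scores.
from collections import defaultdict

# Hypothetical signal weights; a "dislike" lowers the interest score.
WEIGHTS = {"click": 1.0, "comment": 2.0, "like": 3.0, "dislike": -3.0}

def interest_scores(events):
    """Aggregate (topic, signal) events into per-topic interest scores."""
    scores = defaultdict(float)
    for topic, signal in events:
        scores[topic] += WEIGHTS.get(signal, 0.0)
    return dict(scores)

events = [("cooking", "click"), ("cooking", "like"),
          ("gaming", "click"), ("gaming", "dislike")]
scores = interest_scores(events)

# Topics are then ranked by score to decide what to surface next.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # → ['cooking', 'gaming']
```

A real system would of course use far richer features and models, but the principle the regulators were shown is this kind of behaviour-driven scoring.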

Because the algorithms embody many trade secrets, a notable view holds that the transfer to regulators was far more comprehensive than what has been disclosed, and that the full details are being withheld from the public.

Twitch x Hacker

Twitch, the Amazon-owned video game streaming platform, has acknowledged a significant data breach, announcing that a hacker compromised its servers. Twitch admitted that “due to a bug in a Twitch server configuration update accessed by a malicious third party”, data was inadvertently exposed and subsequently published on the internet.

Company representatives assured customers that “full credit card numbers were not disclosed” and “there is no indication that login credentials were compromised.”

The data breach was posted on a 4chan message board and labelled as “Chapter One”, suggesting that further attacks were imminent.

The breach appears to have exposed Twitch’s internal information and source code rather than data tied to specific user accounts. Check Point Software Technologies’ managing director for India and the SAARC region said that any leak of source code is dangerous and can lead to serious consequences. Acronis’ vice president of cyber protection research described it as one of the most significant data breaches ever, as the exposed information could reveal almost the entire digital footprint of Twitch.

Google x Abortion

More than 650 Google employees have signed a petition asking the company to take additional steps to protect the reproductive rights of workers and the public. The petition was sent to senior executives at Google, including CEO Sundar Pichai, on 15 August, according to the Alphabet Workers Union, which represents employees of Google’s parent company. Executives have not yet responded to the petition.

Shortly after a draft opinion leaked in May indicating that the Supreme Court planned to overturn Roe v. Wade, Google announced that it would delete visits to abortion clinics from users’ location history “immediately after the visit” and oppose “overly broad” requests from law enforcement.

However, employees are demanding that the company take further measures to protect users’ data. Among the demands in the petition is that Google “immediately establish user data privacy controls for all health-related activities, such as ensuring that searches for abortion access are never recorded, turned over to law enforcement or treated as a criminal offence”.

Google is also asked to correct misleading search results for abortion providers. A recent Bloomberg investigation found that searches for abortion clinics on Google Maps often yielded results for so-called crisis pregnancy centres, which often try to dissuade patients from abortion.

Google x Child Images

A father who took nude photos of his child to show to a doctor was flagged by Google for criminal activity. Two days after taking the photos of his son, he received a notification on his phone and his account was deactivated for “harmful content that seriously violates Google’s policies and may be illegal”. Nearly 10 years of personal e-mail, correspondence and photo archives were locked.

According to reports, Mark, who realised that something was wrong with his toddler, wanted to show the doctor a photo of his son’s swollen penis. In this incident, which took place in February 2021, the family wanted to send photos for a preliminary consultation instead of going directly to the doctor because of the pandemic; moreover, the request came from the doctor himself. Mark’s wife used her phone to text several close-up photos of her son’s groin area to her iPhone so she could upload them to the healthcare provider’s messaging system.

This activity made Mark the subject of a special police investigation targeting those dealing in child pornography. The unfortunate father had been caught in an algorithmic net designed to catch people exchanging child sexual abuse material.

Tech giants detect images of children being exploited or sexually abused millions of times each year. In 2021, Google alone filed more than 600,000 reports of child abuse material and deactivated the accounts of more than 270,000 users as a result.

The tech industry’s first tool to seriously disrupt the online exchange of such material was PhotoDNA, a database of known abuse images converted into unique digital fingerprints, or hashes. These hashes can be used to scan large numbers of images quickly and detect a match even when a photo has been altered in minor ways. After Microsoft launched PhotoDNA in 2009, Facebook and other technology companies used it to root out users circulating illegal and harmful images.
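PhotoDNA’s exact algorithm is proprietary, but the general idea behind perceptual hashing can be sketched. The toy “average hash” below is an assumption for illustration, not PhotoDNA itself: it reduces an image to a short fingerprint that survives minor alterations, so two images can be compared by the Hamming distance between their hashes.

```python
# Illustrative sketch only: PhotoDNA's actual algorithm is proprietary.
# A simple "average hash" shows the general idea behind perceptual hashing:
# reduce an image to a small fingerprint that survives minor alterations,
# then compare fingerprints by Hamming distance.

def average_hash(pixels):
    """Compute a perceptual hash from a grid of grayscale pixel values (0-255)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image's average.
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A toy 4x4 "image" and a slightly brightened copy of it.
original = [[10, 200, 30, 220],
            [15, 210, 25, 215],
            [12, 205, 28, 218],
            [11, 202, 26, 217]]
altered = [[value + 5 for value in row] for row in original]

h_orig = average_hash(original)
h_alt = average_hash(altered)

# Uniform brightness changes leave the hash unchanged, so a small
# distance threshold still flags the two images as a match.
print(hamming_distance(h_orig, h_alt))  # → 0
```

Real systems use far more robust transforms, but the principle is the same: matching is done on fingerprints rather than exact file bytes, which is why minor edits do not defeat it.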

A bigger breakthrough came in 2018, when Google developed an AI tool that can recognise never-before-seen images of child abuse. This meant finding not only known images of abused children, but also images of unknown victims who could potentially be rescued by the authorities. Google has made the technology available to other companies, including Facebook.

Mark asked Google for a ‘reconsideration’, but Google refused without explanation. With his email and phone services blocked, Mark could not be reached by the police for their enquiry, which complicated matters further. The police investigation was closed quickly once the misunderstanding was recognised, but Mark never got his Google account back.

GDPR x United Kingdom

A petition has been launched calling on the government to abandon its plans to reform the UK data protection regime. The petition emphasises that “plans to reduce burdens on businesses are incompatible with the protection of individual rights and may facilitate the misuse of personal data”.

It was argued that the reforms would leave the UK’s data protection regime no longer harmonised with that of the European Union (“EU”), which could create additional costs for businesses operating internationally, and that some businesses might even relocate as a result.

The campaign stated that, as a result of the reforms, the UK’s data protection regime could be deemed inadequate by the EU, making it difficult for UK businesses to operate in the EU.

Google x Googerteller

An app developer and privacy expert has created a demo app that “beeps” whenever a computer sends data to Google. It is already fair to say that the app “makes a lot of noise”.

It is well known that Google monitors how users use its search engine and other applications in order to provide a better experience within its ecosystem. However, given the prevalence of Google Analytics and Google’s ad networks, there may be more to Google’s tracking than meets the eye. To better understand what kind of data is sent to Google, and where and when it is recorded, Bert Hubert, known as the original creator of PowerDNS, has developed a new application called “Googerteller”.

Googerteller works from a list of IP addresses associated with many Google services (excluding those connected to Google Cloud), a list that Google itself publishes free of charge. While the user runs a programme or browses the internet, a “beep” sounds whenever the computer connects to one of these IP addresses.
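The core check can be sketched in a few lines (in Python here, not Googerteller’s actual code): test whether the destination address of an outgoing connection falls within one of Google’s published IP ranges. The ranges below are a small hypothetical excerpt; the real published list is much longer and changes over time.

```python
# Illustrative sketch, not Googerteller's actual implementation:
# the core idea is testing whether a destination IP address falls
# inside one of Google's published service ranges.
import ipaddress

# A few example ranges in the style of Google's published list;
# the real list is far longer and updated over time.
GOOGLE_RANGES = [
    ipaddress.ip_network("8.8.4.0/24"),
    ipaddress.ip_network("8.8.8.0/24"),
    ipaddress.ip_network("142.250.0.0/15"),
]

def is_google(ip_str):
    """Return True if the address belongs to one of the known Google ranges."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in GOOGLE_RANGES)

# A tool like Googerteller watches outgoing connections and beeps on a match.
for destination in ["8.8.8.8", "1.1.1.1", "142.250.74.110"]:
    if is_google(destination):
        print(f"beep: {destination} is a Google address")
```

The real tool additionally hooks into the operating system to observe live connections as they happen; the membership test above is only the matching step.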

Hubert also demonstrated the application in practice with a demo video shared on his Twitter account. In the video, the system began sounding warnings as soon as he started typing the domain of the Dutch government’s website.

The fact that the warning sound is heard on almost every click on the page, including opening and closing menus on the website, shows how extensive the data sent to Google is.

Although it has been claimed that some of these connections may be due to Chrome’s tight integration with Google services, the fact that almost identical results are obtained in Firefox, the browser developed by Mozilla, weakens that argument considerably.

Googerteller is currently only designed to run on Linux-based operating systems (Debian, Ubuntu, Arch, Fedora, etc.), but users are already sharing clever ways to run the application on Mac and other similar systems, or improved cross-platform versions.

Twitter x Zatko

Twitter’s former head of security has accused the company of serious failings in its handling of user information and spam bots.

Peiter Zatko, a veteran security expert, was hired by Twitter co-founder and then-CEO Jack Dorsey in 2020 to strengthen the company’s security following a mass attack targeting 130 high-profile Twitter accounts. In his complaint to the Securities and Exchange Commission, the Department of Justice and the Federal Trade Commission, Zatko alleged that Twitter intentionally deceived its users, board members and the federal government about the strength of its security measures.

The complaint alleges that Twitter violated a 2011 agreement with the Federal Trade Commission in which Twitter said it would create a comprehensive security plan to protect users’ personal information. While a presentation to the board’s risk committee late last year claimed that 92 per cent of employee computers had security software installed, Zatko says executives avoided mentioning that a third of company computers were still vulnerable.

While Twitter denied Zatko’s allegations in a statement to CNN, emphasising that security and privacy have always been a priority for the company, it is noteworthy that Zatko was dismissed by Twitter’s current CEO, Parag Agrawal, shortly after he raised concerns that the risk committee presentation may have been misleading.

What happened?

Twitter has come under fire in recent months for its handling of sensitive user information. Earlier in August, a former Twitter employee was found guilty of spying on Saudi dissidents and passing their information to the Saudi government. The company was also fined $150 million by the US federal government for using personal data for marketing purposes, despite requesting user email addresses and phone numbers for security purposes.

noyb.eu x Google

noyb.eu filed a complaint against Google with the French Data Protection Authority (“CNIL”). The complaint alleges that Google, which refuses to implement the European Court of Justice’s (“CJEU”) ruling on automated e-mails sent for marketing purposes, exposes users to marketing/advertising through Gmail without their consent.

The complaint states that Google automatically delivers “unsolicited” advertising e-mails/spam messages to users’ Gmail inboxes, in effect carrying out advertising activity disguised as e-mail. Because such sendings qualify as marketing/advertising activity within the scope of commercial electronic messages, the complaint argues that the ePrivacy Directive applies.

Under European Union law, it is quite clear that commercial electronic messaging requires the consent of users, and the Court of Justice has likewise held that all advertising messages transmitted to users are subject to the consent rule. Since this complaint against Google is based not on the GDPR but on the ePrivacy Directive, it is anticipated that the CNIL may rule on the matter and fine Google directly, without relying on any other data protection legislation.

What happened?

It is fair to say that the CNIL has a history with Google. Google was fined €50 million by the CNIL for unclear privacy notices and the lack of a valid legal basis for its personalised advertising activities, and a further €150 million in December 2021 for cookie violations.

California x Sephora

Retail cosmetics giant Sephora was fined $1.2 million for selling consumers’ personal information and failing to process requests to disable cookies in violation of the California Consumer Privacy Act.

At a press conference, California Attorney General Rob Bonta emphasised that Sephora’s actions were unacceptable: the company had provided consumers’ personal data to online third-party trackers in exchange for benefits such as targeted advertising and discount analytics, without disclosing this to consumers.

A company spokesperson stated that Sephora uses cookies for “Sephora experiences” and the “personalised delivery of Sephora product recommendations”. Consumers can currently opt out of this data sale via the “Don’t Sell My Personal Information” link on the Sephora website, the statement said.

Bonta stated that Sephora’s failure to inform consumers that their personal data was being processed, and even sold, together with its failure to remedy the breach despite notifications and warnings, made the violation all the more serious.

Data Monitoring Point x Big Five

A new study claims that of the Big Five major technology companies – Google, Twitter, Amazon, Facebook and Apple (the “Big Five”) – Google tracks more private data about users than any other, and Apple tracks the least.

Apple recently rolled out App Tracking Transparency specifically to protect users’ privacy from other companies. The new report suggests that Apple itself avoids tracking any more data than is necessary to run its services.

According to StockApps.com, “Apple is the most privacy-conscious company in the industry and only stores the information necessary to protect users’ accounts. This is because its websites are not as dependent on advertising revenue as Google, Twitter and Facebook.”

Of the Big Five, Google reportedly tracks 39 separate data points per user, while Apple tracks only 12. Unexpectedly, Facebook tracks only 14 data points, while Amazon tracks 23 and Twitter tracks 24. Although the report does not list what the data points are, it is stated that they include location details, search history, third-party site activity and, in the case of Google, emails in Gmail.

Edith Reads of StockApps.com said: “Most people don’t have the time or patience to read the privacy policies for every website they visit, which can be several pages long. As a result, by agreeing to the terms of the privacy policy, users are allowing Google to collect all the data they need.”

Facebook x Cambridge Analytica

Tech giant Facebook has agreed in principle to settle a damages lawsuit over allowing Cambridge Analytica to access the private data of tens of millions of users, four years after the Observer exposed the scandal that plunged it into enduring controversy. A court filing reveals that Facebook’s parent company Meta has agreed in principle to settle, for an undisclosed sum, a long-running lawsuit alleging that Facebook illegally shared user data with the UK analytics firm.

The case, which forced CEO Mark Zuckerberg to testify before Congress, led to the social media firm being fined billions of dollars and saw its share price plummet by more than a hundred billion dollars, stems from the mass data misuse disclosed by a Cambridge Analytica whistleblower to the Observer in 2018. It is considered noteworthy that Zuckerberg, who had sought to contain the Cambridge Analytica scandal, moved towards the settlement just before he was due to be cross-examined under oath for six hours.

A separate lawsuit alleges that Facebook paid $4.9 billion more than necessary to the US Federal Trade Commission (“FTC”) in its settlement over the Cambridge Analytica scandal in order to protect Zuckerberg; that suit claims the size of the $5 billion settlement reflected the Facebook founder’s desire to avoid being named personally in the FTC complaint.

Xinai Electronics x Facial Recognition Data

While its contents may seem unremarkable in China, where state surveillance is ubiquitous and facial recognition routine, the size of the exposed database is staggering. It held more than 800 million records, making it one of the largest known data security breaches of the year, second only to the massive leak of 1 billion records from the Shanghai police database in June.

The exposed data was identified as belonging to Xinai Electronics, a technology company based in Hangzhou, China. The company installs systems that control the access of people and vehicles to workplaces, schools, construction sites and car parks across China. Its website highlights the use of facial recognition for a range of purposes beyond personnel management and building access, while its cloud-based vehicle number plate recognition system enables drivers to pay for parking in unattended garages.

Security researcher Anurag Sen found the company’s exposed database on a server hosted by Alibaba in China and enlisted TechCrunch’s help in reporting the Xinai security breach. He said the database contained an alarming amount of information that was growing rapidly by the day, including hundreds of millions of records and image files hosted on various domains belonging to Xinai. Neither the database nor the hosted image files were protected by passwords; both were accessible from a web browser to anyone who knew where to look. As of mid-August, the database is no longer accessible.

China uses facial recognition technology to monitor large populations in smart cities, but it also uses it for mass surveillance of the minority populations that Beijing has long been accused of repressing.

Last year, China promulgated the Personal Information Protection Law, its first comprehensive data protection law, seen as the equivalent of Europe’s GDPR privacy rules. The law aims to limit the amount of data companies collect, but it generally exempts China’s police and government agencies. With two mass data breaches in recent months, however, both the Chinese government and tech companies are proving ill-equipped to protect the vast amounts of data collected by surveillance systems.

Snapchat x Biometric Data

Snapchat’s parent company Snap has agreed to a $35 million settlement to resolve a class action lawsuit alleging that it collected and stored users’ unique biometric data without authorisation. Plaintiffs said Snap failed to obtain the written consent required by the Illinois Biometric Information Privacy Act (“BIPA”) before collecting and storing facial recognition data and other biometric information.

In 2020, Facebook agreed to a $550 million settlement after being sued for allegedly collecting biometric data to tag photos in violation of BIPA; in June, Google agreed to pay $100 million to settle a lawsuit alleging that its facial recognition programme in Google Photos violated the same law. Just this week, final approval was given to TikTok’s $92 million settlement for similar infringements.

The company denied the allegations, saying that the “limited data” used by Snapchat filters remains on the user’s phone and is not stored in a centralised data bank. According to Snap, its filters do not collect biometric data that can identify a specific person or be used for facial recognition; they do no more than detect that the nose and eyes on a face are a nose and eyes, and should therefore not be considered to process biometric data.

TikTok x Data Breach Allegation

Some cybersecurity analysts posted on Twitter that they had found an unsecured server that allowed access to TikTok’s storage, which they believed contained personal user data. A few days earlier, Microsoft officials said they had identified a “high-severity vulnerability” in TikTok’s Android app that would “allow attackers to compromise users’ accounts with a single click”.

TikTok officials, on the other hand, stated that the breach allegations are not true and said, “Our security team investigated these allegations and found that the code in question is completely unrelated to TikTok’s backend source code.”

The vulnerability identified by Microsoft was a narrower problem that could affect Android phones. Dimitrios Valsamaras of the Microsoft 365 Defender Research Team wrote that attackers “may have been allowed to access and modify TikTok profiles and sensitive information, such as posting private videos, sending messages, and uploading videos on behalf of users.”

A TikTok spokesperson said the company responded quickly to Microsoft’s findings and fixed the vulnerability, which was found in “some older versions of the Android app”.

In July, TikTok was reported to have engaged in “excessive data collection” on user devices, with the app checking device location at least once an hour and containing code that collected both device and SIM card serial numbers. TikTok denied the findings and said the amount of data had been misstated.

Instagram x Child Privacy

The Irish Data Protection Authority has fined Meta-owned social media platform Instagram €405 million for breaching the GDPR. This is the second highest fine under the GDPR after the €746 million fine imposed on Amazon, and the third fine imposed by the Irish Data Protection Authority on Meta.

Following fines of €225 million for WhatsApp and €17 million for Facebook, this is now the largest fine imposed on a Meta-owned company. It concerns breaches of children’s privacy, including Instagram’s publication of children’s email addresses and phone numbers.

A Meta spokesperson said: “This investigation relates to old settings we updated over a year ago, and since then we’ve released many new features to help keep kids safe and their information private. When anyone under 18 joins Instagram, their accounts are private, only people they know can see what they post, and adults can’t send messages to young people who don’t follow them. We have co-operated with the Irish Data Protection Authority throughout the investigation and are carefully reviewing their final decision.”

Sydney x Fingerprinting

A Sydney high school has decided to install fingerprint readers at the entrance to toilets and sinks to monitor student movement and prevent vandalism, a decision criticised by many data protection and privacy experts as unjustified and disproportionate.

The system was originally introduced after nearly two years of consultation with the high school’s local PTA-style advisory board. The advisory board concluded that, instead of processing fingerprints directly, an alternative was to use an alphanumeric code derived from the fingerprint. Although converting biometric data into characteristic codes or numbers is not, in itself, a method that legitimises the processing under the GDPR, the school advisory board appears to have chosen the option that minimises the relative risk.

Digital Rights Watch Programme leader Samantha Floreani warned that the risks posed by the system far outweigh the potential benefits, stating that preventing vandalism is not a good enough justification for privacy violations, and commented that processing biometric data so that students can use toilets and sinks would be disproportionate.

Uber x Data Breach

The trial of former Uber security chief Joe Sullivan may be the first in which a corporate executive faces criminal charges in connection with a data breach.

The US District Court in San Francisco will begin hearing arguments on whether Sullivan, the ride-sharing giant’s former chief security officer, failed to notify authorities of a 2016 data breach that affected 57 million users worldwide. The breach first came to light in November 2017, when Uber CEO Dara Khosrowshahi revealed that hackers had gained access to the driver’s licence numbers of 600,000 US Uber drivers and the names, email addresses and phone numbers of 57 million users.

Public disclosures such as Khosrowshahi’s are required by law in many US states, with most regulations requiring disclosure “at the earliest practicable time and without unreasonable delay”. Khosrowshahi’s notice, however, came exactly one year after the breach occurred.

In 2018, Uber paid $148 million in a nationwide settlement with 50 state attorneys general for failing to report the data breach. In 2019, two hackers were found guilty of hacking Uber and then extorting the company through its “bug bounty” security research programme, and in 2020 the Justice Department filed criminal charges against Sullivan.

Federal prosecutors alleged that in an attempt to cover up the security breach, Sullivan “instructed his team to keep information about the 2016 Breach under tight control” and treated the incident as part of the bug bounty programme.

The Justice Department complaint alleged that only Sullivan and former Uber CEO Travis Kalanick had knowledge of the full scope of the breach and had a role in the decision to treat it as an authorised disclosure through the bug bounty programme. But the security industry is divided over whether Sullivan alone deserves to be held responsible for the breach. Some question whether the role of other company executives and the board of directors should also be investigated, while others state that Sullivan’s responsibility is clear.

Facebook x Sign In

Major brands such as Dell, Best Buy, Ford Motor, Pottery Barn, Nike, Patagonia, Match and Amazon’s video streaming service Twitch have removed the ability to sign in with Facebook. Jen Felch, Dell’s chief digital and information technology officer, said people have stopped signing in through their social media identity for reasons that include concerns about security, privacy and data sharing.

Signing in directly through a social media account, without creating a username and password or filling in other personal information, is in fact very practical for users. In recent years, however, the profiling of 87 million users ahead of the 2016 US presidential election, exposed by the Facebook-Cambridge Analytica scandal, the subsequent misinformation about masks and vaccines during the COVID-19 pandemic, and all the other violations caused by Facebook, whether intentionally or negligently, have come to be seen by individuals as a “violation of personal space” and have pushed them to act cautiously. This was also the main reason the companies mentioned above removed their Facebook login features.