What’s Happening in the World?

While the field of data protection is developing at an accelerating pace in our country, developments worldwide remain on the radar of the Personal Data Protection Authority (“Authority”).

As past examples have repeatedly shown, the Authority keeps pace with the global agenda, particularly the European Union’s General Data Protection Regulation (“GDPR”), and strives to meet the requirements of the fast-moving data privacy world.

As GRC LEGAL, we closely follow the global agenda, and in this bulletin we present a selection of current news for your information.

The news items below are from February 2024.

DPA x Uber

The Dutch Data Protection Authority (DPA) fined Uber €10 million for breaching privacy regulations regarding the personal data of its drivers.

The DPA found that Uber did not specify in its terms and conditions how long it kept its drivers’ personal data or how it secured this data when sending it to organisations in unnamed countries outside the European Economic Area (EEA).

Uber also obstructed its drivers’ efforts to exercise their privacy rights by unnecessarily complicating requests for access to personal data, the DPA added, noting that Uber is nonetheless taking steps to correct the issues highlighted.

“While the DPA acknowledged that Uber has addressed a small number of ‘low-impact’ issues raised by drivers, it dismissed the vast majority of their allegations as unfounded,” an Uber spokesperson said in a statement. The spokesperson added that the company is continuously working to improve its data request processes.

The fine was imposed after more than 170 French drivers complained to a French human rights organisation, which in turn complained to the French Data Protection Authority. However, as Uber’s European headquarters are in the Netherlands, the complaint was forwarded to the DPA.

GRC LEGAL Comment

The GDPR aims to preserve the level of protection afforded to personal data within the EEA by regulating the transfer of personal data to third countries, and to that end it sets out criteria for such transfers. Data subjects whose personal data are transferred to third countries must be informed of the safeguards provided for the transfer, in accordance with the provisions of the GDPR. The DPA’s sanction underscores how seriously the GDPR’s provisions on data transfers to third countries are enforced.

French SA x Tagadamedia

As part of its 2022 priority investigation topic on commercial prospecting, the French Supervisory Authority (the “French SA”) focused on the practices of professionals in the sector, in particular those who resell data, including the many intermediaries in this ecosystem known as data brokers.

On this occasion, the French SA decided to launch an investigation into Tagadamedia, which mainly operates online competition sites and product testing websites, thereby collecting data from potential customers.

Tagadamedia collects data from potential customers through forms offered on its websites for participating in competitions or product tests. This data is then passed on to the company’s partners for commercial prospecting. Although the company claims to obtain consent for this processing, the forms used did not allow consent to be collected in accordance with GDPR requirements.

During the examinations, the company provided the French SA with two sample forms used to collect data from potential customers. However, the way these forms were presented did not allow freely given, informed explicit consent to be obtained. The button for giving consent was displayed much more prominently, while the option for refusing consent was shown in reduced size with barely visible text, strongly steering users towards agreeing to the transfer of their data to partners.

The company presented a new form to the French SA during the enforcement procedure. This new form likewise failed to secure valid consent, leaving the processing activity without any legal basis.

On this occasion, the French SA identified two breaches of the GDPR:

1- Non-compliance with the obligation to have a legal basis for the processing of data (Article 6 GDPR)
2- Failure to comply with the obligation to keep a record of processing activities (Article 30 GDPR)

The French SA fined Tagadamedia €75,000 for these infringements.

GRC LEGAL Comment

Data protection authorities around the world are focusing on the concept of explicit consent and conducting stringent audits to ensure that data brokers obtain valid consent. The fine imposed on Tagadamedia by the French SA is a reflection of these audits.

An examination of the forms that led to the penalty brings up the concept of dark design. Dark design, in the simplest terms, refers to deceptive design tricks. These manipulative interfaces, which steer users’ attention around websites and applications, have become one of the most common practices companies deploy in their own interest. Because Tagadamedia’s consent process relied on such dark design practices, it undermined individuals’ free will and led to explicit consent being obtained unlawfully. For more detailed information, you can access our study on the subject, Dark Design, on our LinkedIn account.

Dutch Government x Meta

The Dutch government is considering a complete withdrawal from Facebook due to serious concerns about how the social media platform handles data security. Talks with parent company Meta on the issue have not yielded the desired progress. Sources close to the government told De Telegraaf that preparations are being made for a Facebook ban covering the entire government.

State Secretary for Digitalisation Alexandra Van Huffelen confirmed to the newspaper that the government is concerned about how Facebook has handled privacy-sensitive data for years. “In 2017, the Dutch Data Protection Authority (“AP”) found that Facebook had breached the GDPR in two areas: informing users and processing sensitive data,” Van Huffelen said. She added that Meta subsequently made adjustments but failed to resolve the issue, and new problems arose.

Van Huffelen told the newspaper that, due to concerns raised in this and other countries, “the Dutch government conducted an investigation into the privacy risks associated with the use of Facebook Pages, the so-called ‘DPIA'”. The investigation revealed serious shortcomings.

The Secretary of State has discussed these findings with Facebook, but this has not yet resulted in satisfactory commitments or improvements; Meta even disputed the flaws identified in the DPIA. In November, the government therefore asked the AP for advice on whether it is safe to continue using Facebook.

This advice is expected soon. However, according to De Telegraaf’s sources, the government expects to ban Facebook and is already making preparations. The ministries are considering what consequences withdrawing from Facebook would have for them.

According to the newspaper, other social media platforms could follow suit. Van Huffelen recently decided to stop using X, formerly Twitter. She emphasised that this was a personal decision and not government policy, but one of the reasons she gave was that X was not open to discussing and implementing improvements.

GRC LEGAL Comment

The Dutch government, which banned TikTok over concerns about the espionage risks posed by the Chinese platform, is now weighing the possibility of abandoning Facebook entirely over data security concerns.

The fact that Meta, which was previously fined a record $1.3 billion by the Irish Data Protection Commission and whose conduct in the field of personal data security we might characterise as that of a ‘repeat offender’, continues to violate the GDPR justifies the Dutch government’s security concerns. Given Meta’s understanding of privacy, we hope that a Dutch decision to abandon Facebook would set an example for other countries.

Consumer Defenders x Toyota

“Choice”, a consumer advocacy group, says Toyota cars collect and potentially share location data, driving data, fuel levels and even personal information such as phone numbers and email addresses. Choice said the “Connected Services” feature can send personal and vehicle data to third parties, and that drivers have been told that removing the relevant components risks voiding their warranty.

Toyota emphasised that it takes customer privacy “extremely seriously”, but acknowledged that the data communications module (“DCM”) behind the “Connected Services” feature can only be disabled, not removed, from its vehicles; removing it could void drivers’ warranties and render Bluetooth and the speakers non-functional.

“Car companies say these tech features improve driver safety, but in a world of data hacking and sharing, it’s just another way for companies to collect valuable information, whether consumers like it or not,” said Rafi Alam, senior campaigns and policy advisor at Choice.

“Alarmingly, Toyota’s Connected Services policy says that if you don’t opt out, it will collect and use personal and vehicle data for research, product development and data analysis purposes,” he said. He also added that Toyota’s policies are incredibly vague about what actually counts as ‘consent’.

According to an investigation by Choice, a customer named Matthew claimed that a few months after purchasing his $68,000 Toyota HiLux, he became aware of the Connected Services feature and began receiving emails asking him to sign up for it.

Irritated by the feature, the customer asked the dealership to remove the technology from his car (not just disable it), but claimed he was told that this would void the warranty and put his insurance at risk. As a result, he did not take delivery of the car and cancelled his order, but claimed that the dealership refused to refund his $2,000 deposit.

Alam said privacy issues have become a widespread concern in automobiles, “almost every new vehicle seems to have a ‘smart’ connection installed.” He called on the federal government to strengthen security measures and urgently introduce bans on the collection and use of personal data. “People should not have to give up their right to privacy to buy a new car,” Alam said.

GRC LEGAL Review

Choice’s findings highlight serious problems: Toyota’s ambiguous privacy policy, the fact that the Connected Services feature can be disabled but not completely removed from the vehicle, and the risk that removal would void the warranty and leave Bluetooth and the speakers dysfunctional. Indeed, because the feature cannot be fully removed without other features ceasing to work, Toyota can be said to be conditioning its service on the collection of consumers’ personal data.

Given Toyota’s professed sensitivity to consumers’ privacy rights, there is growing doubt as to whether the Connected Services feature actually stops collecting data once it has supposedly been disabled.

France x Cyber Attackers

The data of almost half of France’s citizens has been compromised in a major security breach at healthcare payment service providers, according to France’s data privacy watchdog.

The French Data Protection Authority (National Commission on Informatics and Liberty, “CNIL”) announced at the end of January that the systems of the payment organisations Viamedis and Almerys had been breached and the data of more than 33 million customers had been stolen. The affected data included dates of birth, marital status, social security numbers and insurance information, but no banking information, medical data or contact details were compromised.

Viamedis said the data was compromised in a phishing attack targeting healthcare professionals and that the stolen identity data was used to gain access to the system, while Almerys did not explain how the data breach was carried out. However, it is speculated that the breaches were carried out in a similar manner through a portal used by healthcare providers.

CNIL stated that it is working with Viamedis and Almerys to ensure that those affected by the data breach are notified, as required by the GDPR, but that notifying a breach affecting almost half the country will take time. It also announced that an investigation has been opened to determine whether the organisations were at fault for the breaches. French authorities continue to warn that the compromised data could be combined with data from other breaches and used in phishing or social engineering attacks.

Information Corner

Social engineering is a type of attack that seeks to obtain information or access by manipulating people, rather than mounting a direct technical attack on computer systems or networks. Attackers typically exploit people’s natural tendencies, curiosity, trust or other emotional weaknesses to extract information or induce them to perform certain actions. For example, they may harvest personal information with a fake email, steal confidential information over a phone call, or carry out phishing attacks by redirecting victims to a fraudulent website.
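Purely as an illustration of the indicators described above, the minimal Python sketch below flags a few classic phishing signals: a sender address outside the expected corporate domain, pressure language, and links pointing to external hosts. The domain example.com, the phrase list and the function name are assumptions invented for this example; real defences rely on far richer signals and, above all, on staff awareness.

```python
import re
from urllib.parse import urlparse

# Minimal sketch only: naive heuristics for common phishing indicators.
# Real anti-phishing systems use much richer signals (authentication
# headers, sender reputation, machine learning models).

CORPORATE_DOMAIN = "example.com"  # assumption: the organisation's own domain
PRESSURE_PHRASES = [              # assumption: a tiny sample of pressure language
    "urgent action required",
    "confidential transaction",
    "verify your account",
]

def phishing_indicators(sender: str, subject: str, body: str, links: list[str]) -> list[str]:
    """Return human-readable warnings for a single email."""
    warnings = []

    # 1. Sender address outside the expected corporate domain.
    match = re.search(r"@([\w.-]+)", sender)
    domain = match.group(1).lower() if match else ""
    if domain and domain != CORPORATE_DOMAIN:
        warnings.append(f"Sender domain '{domain}' is not the corporate domain")

    # 2. Pressure language typical of social engineering.
    text = f"{subject} {body}".lower()
    warnings += [f"Pressure phrase: '{p}'" for p in PRESSURE_PHRASES if p in text]

    # 3. Links whose destination lies outside the corporate domain.
    for link in links:
        host = (urlparse(link).hostname or "").lower()
        if host and not host.endswith(CORPORATE_DOMAIN):
            warnings.append(f"Link points to external host: {host}")

    return warnings

# Example: an email resembling the scenarios described in this bulletin.
print(phishing_indicators(
    sender="CFO <cfo@secure-payments.xyz>",
    subject="Urgent action required",
    body="A confidential transaction must be completed today.",
    links=["https://login.secure-payments.xyz/portal"],
))
```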

GRC LEGAL Review

The large-scale data breach in France once again demonstrates the importance of the information security measures service providers must take to protect personal data. The breach occurred in the systems of healthcare payment providers and involves highly critical and sensitive data, raising serious privacy concerns. It is vital that organisations, especially those processing such sensitive data, keep data protection measures at the highest level and monitor closely for security vulnerabilities.

Hong Kong x DeepFake

In what is believed to be the most ambitious deepfake scam yet perpetrated in Hong Kong, attackers convinced an unnamed company employee to transfer $25 million through a fake video conference featuring simulations of the CFO and other staff members. Although deepfake voice fraud is becoming increasingly widespread, this incident is the first known case in which fake representations of more than one person were used.

All of the employees who were impersonated had publicly available information, voices and images. The targeted employee initially suspected fraud, but ultimately made fifteen bank transfers worth a total of $25 million.

The targeted employee first received a phishing email from the fraudster, who claimed to be the company’s CFO and stated that a confidential transaction was to be carried out. When the employee grew suspicious, he was persuaded to join a fake video group call controlled by the fraudster, which featured simulations of the CFO and other company employees. The truth came to light only a week later, when the employee contacted the company’s head office about the transactions.

In a public statement on the incident, the Hong Kong police said that deepfake fraud is on the rise in their jurisdiction, that in 2023 there were at least 20 fraud cases in which deepfake videos were used to deceive facial recognition systems, and that no arrests have yet been made in the present case, with the investigation ongoing.

As is well known, artificial intelligence is already being used to enhance all kinds of cybercrime and fraud. Some attackers use AI tools such as ChatGPT to polish the errors in fraudulent messages and emails that would otherwise raise victims’ suspicions, while others train AI to write malicious code. As the Hong Kong deepfake scam shows, the biggest advances so far have been in fake voice and video.

There are now tools that can create a convincing facsimile of a voice from just a few spoken sentences, which is particularly dangerous for public figures and company representatives. Microsoft’s new VALL-E tool can replicate a voice from as little as three seconds of audio. These developments have fuelled an explosion in phone call scams in which criminals pose as family members and ask for money under the pretence of an emergency.

Facial recognition systems also appear particularly vulnerable to deepfake scams, calling into question the future viability of biometric facial recognition. A recent Gartner analysis predicts that, due to AI-powered deepfake attacks, a wide range of biometric verification systems will become unreliable by 2026, with 30% of organisations no longer considering such tools reliable on their own.

GRC LEGAL Review

As the Hong Kong scandal shows, deepfake applications, whose definition the Authority clarified in an information note published in recent months, are creating major repercussions worldwide. Awareness of deepfakes, in both private and professional life, will therefore be important for individuals and companies alike in preventing large-scale data breaches.

In particular, strengthening cybersecurity operations and internal communication channels will be an effective step for companies against deepfake attacks that can cause significant financial losses. In addition, multiple levels of approval should be required for critical transactions such as money and information transfers (see the sketch below), and it is vital that staff are trained not to hesitate to question suspicious requests, bearing in mind that deepfakes are now a reality.
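As one way of picturing the multi-level approval recommendation above, here is a minimal sketch of a “four-eyes” (maker-checker) control in Python. The threshold, the two-approver rule and all names are assumptions chosen for the example, not a prescribed implementation.

```python
from dataclasses import dataclass, field

# Minimal sketch of a "four-eyes" (maker-checker) control for transfers.
# The threshold and the two-approver rule are illustrative assumptions.

HIGH_VALUE_THRESHOLD = 100_000  # extra approval required above this amount

@dataclass
class TransferRequest:
    requester: str
    amount: float
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # No one may approve their own request.
        if approver == self.requester:
            raise ValueError("Requester cannot approve their own transfer")
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        # High-value transfers need two independent approvers, others one.
        required = 2 if self.amount >= HIGH_VALUE_THRESHOLD else 1
        return len(self.approvals) >= required

# A $25 million transfer, like the one in the Hong Kong case, would need
# two independent approvals before execution.
request = TransferRequest(requester="alice", amount=25_000_000)
request.approve("bob")
print(request.can_execute())   # False: a second approver is still needed
request.approve("carol")
print(request.can_execute())   # True
```

Even a simple control of this kind would have forced a second person to scrutinise the fraudulent transfers described above before any money left the company.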