What’s Happening in the World?

While the field of data protection is developing at an accelerating pace in our country, developments worldwide remain on the radar of the Personal Data Protection Authority ("Authority"). As we have seen repeatedly in past examples, the Authority keeps pace with the global agenda, in particular the European Union's General Data Protection Regulation ("GDPR"), and strives to meet the requirements of the fast-moving data privacy world.

As GRC LEGAL, we closely follow the global agenda and present this selection of current developments for your information.

The news below covers March 2024.

EU Elections X TikTok

The European Parliament’s (“EP”) plan to use TikTok to campaign for the June European Union (“EU”) elections, despite its previous cybersecurity ban, raises questions about secure implementation, as the EP has not elaborated on how it will conduct the campaign in a cybersecurity-friendly manner.

In early February, Euractiv reported that the EP was preparing to use TikTok in its election campaign, despite banning the app from corporate devices last year due to cybersecurity concerns.

Valentin Weber, a research fellow at the Centre for Geopolitics, Geoeconomics and Technology of the German Council on Foreign Relations (DGAP), told Euractiv that the app was banned on corporate devices because it could record people, as well as capture photos and documents belonging to the Commission.

How can the EP use TikTok safely?

According to Weber, the first step to secure TikTok use would be to buy a new phone used only for TikTok. The Associated Press reports that this is what United States (“US”) President Joe Biden’s campaign is doing in the US: using a TikTok-only mobile phone and implementing other measures to keep the app isolated from other communications.

“The second condition would be not to use any of the EP’s infrastructure. This means that you cannot use the Wi-Fi network in the EP buildings,” explained Weber. But even a dedicated phone that never connects to the Parliament’s Wi-Fi can pose a risk.

“It’s not just the data the phone has, but what the phone can do. It can record audio; it can record video. So you can’t use it in closed meetings, you can’t use it on campaign trips where the candidate is saying something or discussing potentially dangerous information. If EU staff attend a confidential meeting with their private phones and TikTok installed, this would pose a similar risk.” Weber added that “the only safe thing to do is to cross-post from other platforms, from a location where confidential matters are not being discussed”. Cross-posting means sharing the same content on more than one social media platform.

Security Risks

Weber explained that TikTok security concerns are linked to fears of espionage by the Chinese government. No conclusive evidence of TikTok’s links with the Chinese government has been presented, and representatives of the platform have consistently denied such links, although some suggest that evidence of links with Beijing does exist. However, Weber said that in terms of privacy, “the use of other applications is not much better”.

Disinformation Prevention

Nevertheless, Weber believes it makes sense for politicians and organisations to join the platform to campaign and reach their audiences. “Millions of young citizens, many of whom are likely first-time voters, use TikTok to get information about issues they are interested in,” an EP spokesperson said. A recent Washington Post article on a report by New York University also pointed out that TikTok has become more influential in politics since 2020.

A TikTok spokesperson told Euractiv that they welcome political organisations on TikTok, especially in the run-up to elections, noting that 142 million people across the EU use TikTok every month. “Verified accounts of politicians and organisations provide voters with another way to access their representatives and reliable sources in the common fight against misinformation,” the spokesperson added.

Earlier this month, TikTok announced that, in preparation for the EU elections and as part of its fight against disinformation, it would launch an in-app, local-language Election Centre for each member state, among other measures.

Meanwhile, the EP seems to have other cybersecurity concerns; according to Politico, an internal email revealed that the EP’s defence committee was the subject of phone hacking. This follows suggestions that the EU institution’s cybersecurity is not ready for the elections and the attacks that may accompany them.

GRC LEGAL Comment

Developing technology has had negative as well as positive effects on the world, and it can be said that social media platforms are not as innocent as they seem. Indeed, the Facebook–Cambridge Analytica scandal, in which individuals’ opinions were manipulated through a social media platform to sway election outcomes, proved as much. Accordingly, while these platforms, which have become indispensable in our lives, are the most effective means of reaching the masses, they also bring dangers with them.

TikTok, with its millions of users, is undoubtedly an effective tool for conducting election campaigns and addressing voters. However, considering the data breaches TikTok has experienced over the years, it is questionable whether the process can be kept free of manipulation even if strict measures are taken. Using a dedicated phone on which only the TikTok application is installed and avoiding the EP building’s infrastructure may look like simple measures, but they are important safeguards against possible violations. Each EP member must nevertheless observe these measures with the utmost care, as even slight carelessness may create a significant opportunity for a data breach. In this context, it is important that the entire process is carried out in line with the principles of transparency and accountability and that the EP makes a detailed statement on the subject as soon as possible.

Serco Leisure X ICO

The Information Commissioner’s Office (“ICO”) said that Serco Leisure had unlawfully processed employee biometric data using facial recognition technology to monitor more than 2,000 employees across 38 leisure sites in the UK and ordered it to cease the practice.

The ICO found the practice, which was used to check employee attendance, “neither fair nor proportionate”. Serco Leisure said it would comply with the ICO’s enforcement action, but noted that it had taken legal advice before installing the cameras and that employees had not complained about them in five years. It added that the system was introduced to “make entry and exit easier and simpler” for employees, who were consulted beforehand and received the change positively.

The ICO said that employees, whose fingerprints were also scanned, were not offered a clear alternative to the collection of their biometric data. It also found that the firm had failed to demonstrate why the practice was necessary when less intrusive ways of monitoring attendance, such as ID cards or badges, were available.

UK Information Commissioner John Edwards said Serco Leisure had increased the “imbalance of power in the workplace” and made employees feel as if they had no choice but to hand over their biometric data. “Serco Leisure did not fully assess the risks before introducing biometric technology to monitor employee attendance and placed its business interests above the privacy of its employees,” he said.

GRC LEGAL Comment

The Personal Data Protection Board, like many other data protection authorities, considers obtaining biometric data from employees to track their entry to and exit from work to be disproportionate. Indeed, biometric data falls within the scope of special categories of personal data, the breach of which can have far more severe consequences for data subjects, and the need to protect such data outweighs the employer’s legitimate interest.

The ICO likewise considered the collection of biometric data unlawful where alternative ways of tracking employees’ entry and exit exist that interfere less with fundamental rights and freedoms. It has also announced a new guide for all companies considering the use of their employees’ biometric data, showing how to comply with data protection law. We hope that the work carried out in this case will serve as a guideline for all data controllers.

Garante X Enel Energia

The Italian Data Protection Authority (Garante per la protezione dei dati personali, “Garante”) has fined Enel Energia, the country’s largest utility company, more than €79 million for misusing customers’ personal data by exposing them to telemarketing activities.

In a statement, Garante alleged that Enel Energia used customers’ personal data in connection with at least 9,300 contracts to unlawfully promote its energy and gas services. The statement also alleged that Enel Energia obtained the personal data of 978 users from four companies outside its sales network. Garante said Enel Energia’s customer management and service activation information systems showed “serious security deficiencies”.

Enel Energia, the recipient of the largest fine Garante has imposed to date, said it would appeal the decision, stating that it had always acted correctly and had taken “all appropriate measures” to ensure the security of its systems and comply with data protection rules.

GRC LEGAL Comment

Companies and data controllers that expose customers to telemarketing without their consent in order to conduct promotional and advertising activities engage in continuous unlawful data processing in pursuit of reaching more people. Deterrent administrative fines on such data controllers are the most effective way to eliminate this violation. In this respect, the record fine imposed by Garante will set a precedent for data protection authorities and move every country a step forward in protecting personal data as a fundamental right and freedom.

Standard Contractual Clauses for Overseas Transfers in the UK Updated on 21 March 2024!

Under the UK GDPR, UK organisations wishing to transfer personal data to a recipient in a jurisdiction that the UK government has not certified as providing adequate protection for that data must rely on a valid transfer mechanism to effect the transfer. One of the more commonly used transfer mechanisms is the Standard Contractual Clauses (“SCCs”) approved by the UK government.

Until now, for contracts entered into before 22 September 2022, one of the SCCs that organisations could rely on was the old EU SCCs issued by the European Commission under the old Data Protection Directive. As of 21 March 2024, this is no longer the case. Contracts relying on the old EU SCCs as the relevant data transfer mechanism should therefore have been updated before that date to ensure that international transfers of personal data remain compliant with data protection legislation.

Which transfer mechanisms can be relied on instead?

Unless another transfer mechanism can be relied upon, either (i) contracts containing the old EU SCCs will need to be updated to include the new EU SCCs published by the European Commission on 4 June 2021, together with the UK Annex, or (ii) the parties to the relevant contract will need to sign the UK’s International Data Transfer Agreement (“IDTA”).

At this point, it should be noted that the new EU SCCs alone are not recognised as a valid transfer mechanism for restricted transfers of personal data under the UK GDPR; the UK Annex must be added.

What steps should be taken?

Firstly, it should be determined which existing contracts involve the transfer of personal data outside the UK (to a country not subject to an adequacy decision) and which of those contracts rely on the old EU SCCs as the relevant transfer mechanism. Once this has been established, there will be further points to consider, such as how to amend the contracts, but these first steps will help clarify the scope and scale of the project, which will determine the approach to be taken.

Assuming SCCs are relied upon, it should then be determined how best to incorporate the new EU SCCs and the UK Annex into the relevant contracts, or how best to replace the old EU SCCs with the IDTA. The most likely option will be to amend the contract, but in some cases it may make sense to enter into a new contract instead.

As part of this process, it should also be ensured that an up-to-date Transfer Risk Assessment (“TRA”) is available for each restricted transfer. Completion of a TRA is a requirement for transfers to a country that is not subject to an adequacy decision, including transfers based on the IDTA or on the new EU SCCs and the UK Annex.

What about sub-processors?

The new SCCs apply to all restricted transfers, including transfers from a controller to a processor or from a processor to a sub-processor. On this basis, any organisation that shares personal data outside the UK must comply with these new obligations from 21 March 2024, regardless of its controller or processor status.

Organisations acting as data controllers should ensure that their contracts with processors contain appropriate transfer obligations, so that each processor puts an appropriate international data transfer mechanism in place with its sub-processors.

What happens if compliance is not achieved?

As with any breach of the UK GDPR, if an organisation is found to be non-compliant, the ICO has the power to impose fines (up to £17.5 million or 4% of total worldwide annual turnover, whichever is greater). However, fines at this level are envisaged for serious breaches that put personal data at risk.

Whilst it is considered unlikely that the ICO would actively impose fines for non-compliance with the transfer rules immediately after the deadline has passed, the most sensible stance is to address the issue sooner rather than later. As we have seen countless times, the ICO generally looks more favourably on organisations that are seen to be trying to comply with data protection legislation than on those that ignore the rules.

GRC LEGAL Comment

UK organisations, regardless of their status as controller or processor, were required to replace the old EU SCCs used for international data transfers with the new EU SCCs by 21 March 2024. It remains to be seen what sanctions the ICO will impose on organisations that fail to fulfil this obligation.

As for recent developments in our country, with the entry into force of the amendments to the Personal Data Protection Law (“Law”) on data transfers abroad, transfers based on appropriate safeguards will become possible once the Board announces its SCCs. Unlike under the GDPR, data controllers and data processors in our country are obliged to notify the Board of the SCCs they execute, an instrument that has been in use in the EU for many years; otherwise, they will face administrative fines.

Artificial Intelligence Act Adopted by Members of the European Parliament!

The Artificial Intelligence Act (the “Act”), agreed in negotiations with Member States in December 2023, was approved by Members of the European Parliament with 523 votes in favour, 46 against and 49 abstentions.

The Act aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and making Europe a leader in the field. It imposes obligations on AI according to its potential risks and level of impact.

Prohibited Applications

The new rules prohibit certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

Emotion recognition in the workplace and schools, social scoring, predictive policing, and artificial intelligence that manipulates human behaviour or exploits human vulnerabilities will also be banned.

Law Enforcement Exemptions

The use of biometric identification systems (Remote Biometric Identification, “RBI”) by law enforcement agencies is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI may be deployed only if strict safeguards are met, for example where its use is limited in time and geographical scope and subject to specific prior judicial or administrative authorisation. Such uses may include the targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact (“post-remote” RBI) is considered a high-risk use case and requires judicial authorisation linked to a criminal offence.

Obligations for High-Risk Systems

Clear obligations are also envisaged for other high-risk AI systems, owing to their significant potential to harm health, safety, fundamental rights, the environment, democracy and the rule of law. Examples of high-risk uses of AI include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare and banking), certain systems in law enforcement, migration and border management, and justice and democratic processes (e.g. influencing elections).

Such systems must assess and mitigate risks, keep usage records, be transparent and accurate, and ensure human oversight. Citizens will have the right to lodge complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.

Transparency Requirements

General-purpose AI (“GPAI”) systems, and the GPAI models on which they are based, must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. More powerful GPAI models that may pose systemic risks will face additional requirements, including conducting model evaluations, assessing and mitigating systemic risks, and reporting incidents.

In addition, artificial or manipulated images, audio or video content (“deepfakes”) will need to be clearly labelled as such.

Measures to Support Innovation and SMEs

Regulatory sandboxes and real-world testing will need to be established at national level, and made accessible to SMEs and start-ups, so that innovative AI can be developed and trained before being placed on the market.

Next Steps

The legislation is still subject to a final check by lawyer-linguists and is expected to be adopted definitively before the end of the legislative term. It also needs to be formally endorsed by the Council.

The Act will enter into force twenty days after its publication in the Official Journal and will be fully applicable 24 months later. However, bans on prohibited practices will apply six months after entry into force, codes of practice nine months after, general-purpose AI rules (including governance) 12 months after, and obligations for high-risk systems 36 months after.

Background

The Act responds directly to citizens’ proposals from the Conference on the Future of Europe. Most concretely, it responds to proposal 12(10) on enhancing the EU’s competitiveness in strategic sectors; to proposal 33(5) on a safe and trustworthy society, including countering disinformation and ensuring humans are ultimately in control; to proposal 35 on promoting digital innovation, including point (3) on ensuring human oversight and point (8) on the trustworthy and responsible use of AI, setting safeguards and ensuring transparency; and to proposal 37(3) on using AI and digital tools to improve citizens’ access to information, including for persons with disabilities.

GRC LEGAL Comment

Artificial intelligence applications, which have become widespread in today’s technological age, are undoubtedly in close contact with fundamental rights and freedoms and give rise to many violations. It can be said that the Act, the world’s first comprehensive regulation of artificial intelligence, attaches great importance to the protection of personal data and the privacy of private life, as shown by the serious obligations it imposes on high-risk systems and the high administrative fines it stipulates.

While it remains to be seen how implementation will take shape once the Act enters into force, it is clear that all technology companies should work towards full compliance by establishing a formal governance system, although it is an open question how well they, and the Act itself, will keep pace with rapidly developing technology.
