What’s Happening in the World?

While the field of data protection is developing at an accelerating pace in our country, innovations worldwide remain on the radar of the Personal Data Protection Authority (“Authority”).

As past examples have repeatedly shown, the Authority keeps up with the global agenda, especially the European Union’s General Data Protection Regulation (“GDPR”), and strives to meet the requirements of the fast-moving data privacy world.

As GRC Legal Law Firm, we closely follow the world agenda and present a selection of the current developments for your information with this content.

The news below covers November 2022.

Twitter x Elon Musk

In line with the Federal Trade Commission’s (“FTC”) warning to Elon Musk’s Twitter that “no CEO or company is above the law”, the platform’s chief regulator in the European Union (“EU”) is continuing its scrutiny following the departure of senior staff responsible for security and compliance.

Graham Doyle, a representative of the Irish Data Protection Authority (“Data Protection Commission,” “DPC”), which oversees Twitter under the GDPR, stated that he had contacted Twitter following media reports of the resignation of Twitter’s data protection officer and that a meeting between the DPC and Twitter would be organised.

It was confirmed that Twitter had not reported the departure of its data protection officer prior to the media reports, and the main agenda of the meeting will be to clarify the details of this departure. It is rumoured that the DPC has other concerns and that this will not be the only agenda of the meeting.

According to Doyle, one of the issues to be discussed is the question of main establishment. Twitter is obliged to have a data protection officer and to provide their details to the DPC; equally, under the GDPR’s one-stop-shop (“OSS”) mechanism, an organisation’s main establishment ties it to a single lead regulator, meaning that decision-making on the processing of EU users’ data must take place in that country. While this constitutes one of the main organisational principles, the DPC is seeking to establish whether it still applies to Twitter.

The fact that Ireland is Twitter’s lead regulator for GDPR under the OSS is important because Ireland’s data watchdog is the sole authority when it comes to whether to open an investigation or act on concerns about Twitter’s compliance.

From Twitter’s perspective, the arrangement is advantageous because it facilitates compliance: Twitter only needs to liaise with one (lead) regulator, rather than dealing with inquiries from multiple data protection agencies (potentially in different languages).

Ireland has the role of lead regulator for Twitter because Twitter designates its Dublin office as its main establishment in the EU, in line with the GDPR’s criteria of “place of central administration within the EU” or “place where the main processing activity takes place within the EU”.

However, if it is accepted that Twitter no longer has this main establishment in Ireland, the arrangement would immediately be reorganised: data protection authorities across the bloc, from any of the EU’s 27 member states, would be able to initiate investigations or act on local complaints themselves. This would increase both the pace and the regulatory complexity of enforcement against Twitter’s European operations.

While Elon Musk cut 50% of Twitter’s headcount globally in November, questions were raised in Dublin about the stability of its main establishment status under the GDPR. According to an Irish Times report, the reduction is being described as a massacre in the Irish office, with more than 50 per cent of local staff affected.

Reports that all was not well at senior levels of Twitter’s security and privacy function spread across Twitter in mid-November.

Journalists Casey Newton and Zoë Schiffer reported that Twitter’s Chief Information Security Officer (“CISO”), Chief Privacy Officer (“CPO”) and Chief Compliance Officer (“CCO”) had resigned, citing messages shared on Twitter’s internal Slack.

Twitter CISO Lea Kissner confirmed the departure in a tweet, as did Twitter’s now former CPO Damien Kieran. Twitter’s reportedly former CCO, Marianne Fogarty, tweeted what could amount to an indirect confirmation, writing: “Therapy Thursdays have taken on new meaning. #LoveTwitter”

Since Musk’s takeover, inquiries to Twitter’s press branch have gone unanswered, so it has not been possible to get an official answer as to what is going on. The company’s communications department appears to be a major casualty of the 50 per cent headcount reduction that Musk quickly implemented during the takeover, with press staff either completely or almost completely laid off.

“The most important thing for us is that we are kept informed, that we know who the data protection officer is, that we have their contact details, that we can contact them whenever we need to contact them. According to the legislation, data protection officers don’t have to be in a geographically specific place,” Doyle said.

“We need to know who they are and we need to have all the details, but the key point is that, to benefit from main establishment status, decision-making has to take place mainly in the country where the company is established. If the decision-making does not take place in Ireland, all supervisory authorities will have regulatory authority,” he added.

In a climate where most of Twitter’s core compliance staff have quit and Musk is surrounded by a team that does little beyond cheering his jokes, it seems highly doubtful that he grasps the dangers facing Twitter.

Musk also has a history of mocking regulators, so he may be entirely comfortable ignoring the implications for Twitter’s regulatory compliance, which would deepen the DPC’s concerns and make the loss of Twitter’s main establishment status more likely.

Under the GDPR, organisations that process certain types of data (and/or process personal data at sufficient scale, as Twitter does) are required to appoint a data protection officer, an independent expert, and to provide adequate resources for their work. Musk therefore has to find someone to fill Kieran’s shoes.

Although it may seem like a small problem, Kieran’s resignation and the departure of his senior compliance colleagues may be the tip of the iceberg, signalling a privacy alarm.

The role of the data protection officer is to act as a contact person for data protection authorities, to provide guidance in compiling Data Protection Impact Assessments and to advise on monitoring compliance with data protection obligations. Since suitable candidates must have both expertise and independence, Musk will not be able to sidestep the problem by appointing just anyone willing to wave through his freewheeling plans.

Product development under Musk also looks like a compliance nightmare. Twitter’s paid blue tick raised concerns that it could lead to identity theft, and criticism flared as soon as the new practice was launched. Both the information security risks it may create and the speed of its rollout are seen as contrary to the spirit of the GDPR and its secondary regulations.

Since the rapid rollout of the blue tick left no time to assess the risks of implementation and reflect them in protective measures, the resignations of the CISO, CPO and CCO seem justified, perhaps on the grounds that they felt they could no longer do their jobs properly.

Failing to properly appoint a data protection officer, or to notify their resignation, would not typically attract the highest penalties; still, fines for non-compliance with the GDPR can reach up to 4% of global annual turnover for the most serious breaches. It therefore seems a plausible future scenario that Musk will take the issue seriously and sit at the table with data protection authorities rather than opt for window-dressing solutions, since doing so would also serve the profitability concerns to which he is most sensitive.

Google x Location Tracking

Google has agreed to a $391.5 million settlement with 40 state attorneys general regarding its location tracking practices. The settlement outlines how Google misled users into thinking they had turned off location tracking even as it continued to collect location information.

The investigation, which resulted in the largest attorney general-led consumer privacy settlement in US history, was co-led by Oregon and Nebraska.

Oregon Attorney General Ellen Rosenblum said in a news release, “For years, Google has prioritised profits over the privacy of its users. It has been cunning and deceptive in this regard. Consumers thought they had turned off location tracking features on Google, but the company continued to secretly record their movements and use that information for advertisers.”

Google said in a statement that it had already addressed and fixed some of the location tracking practices detailed in the settlement. “Consistent with the improvements we have made in recent years, we have resolved this investigation, which was based on old product policies that we changed years ago,” a Google spokesperson said.

As a requirement of the settlement, Google agreed to disclose location tracking and improve user controls starting next year. The settlement requires Google to show users additional information each time they enable or disable a location-related account setting. Important information about location tracking should also no longer be hidden going forward.

Google said in a blog post that it “will provide a new control that allows users to easily turn off location history and Web and Mobile App Activity settings and delete history data in one simple flow.” Google’s plans also include adding annotations to activity controls and data & privacy pages.

Alongside these changes, Google will create a comprehensive knowledge centre highlighting location settings and plans to provide users creating new accounts with a more detailed explanation of what Web and Mobile App Activity is and what information it includes. Google also said it will continue to delete location history data for users who have not recently added new location history data to their accounts.

“Until we have comprehensive privacy laws, companies will continue to compile large amounts of our personal data for marketing purposes with few controls,” said Ellen Rosenblum.

The attorneys general launched the Google investigation after a 2018 Associated Press report found that Google continued to record users’ location data even when it explicitly said it was not doing so. The investigation found that Google had violated state consumer protection laws by misleading consumers about its location tracking practices since at least 2014.

Last month, Google agreed to pay $85 million to the state of Arizona to settle a separate lawsuit alleging that it deceived users by collecting location data without their consent. Google is also already facing lawsuits from Washington, DC, Texas, Washington state and Indiana, which allege that Google deceived users by collecting location data even when they believed such tracking had been disabled.

Russia x Pushwoosh

Thousands of mobile apps in Apple’s (AAPL.O) and Google’s (GOOGL.O) online stores contain computer code developed by a technology company called Pushwoosh that presents itself as being based in the United States (“US”) but is in fact of Russian origin, Reuters has found.

The Centers for Disease Control and Prevention (the “CDC”), the US’s main agency in combating major health threats, said it had been deceived into believing that Pushwoosh was based in the US capital. After learning of its Russian origins from Reuters, it removed Pushwoosh software from seven publicly available applications, citing security concerns.

The US Army announced last March that it had removed an application containing Pushwoosh code due to the same concerns. This application was used by soldiers at one of the country’s main combat training bases.

According to company documents made publicly available in Russia and reviewed by Reuters, Pushwoosh, which employs about 40 people, is registered as a software company that also performs data processing in the Siberian city of Novosibirsk. It reportedly generated revenue of 143,270,000 roubles ($2.4 million) last year and is registered with the Russian government to pay taxes in Russia.

According to Reuters, on social media and in US regulatory documents, Pushwoosh has at various times identified itself as a US company domiciled in California, Maryland and Washington, D.C.

Pushwoosh provides code and data processing support for software developers, allowing them to profile the online activity of mobile app users and send customised “push notifications” from Pushwoosh servers.

Pushwoosh says on its website that it does not collect sensitive information, and Reuters found no evidence that Pushwoosh has misused user data. However, Russian authorities can compel local companies to hand over user data to domestic security agencies.

Pushwoosh founder Max Konev said in September that the company was not trying to mask its Russian origins, saying “I am proud to be Russian and I would never hide it.” Pushwoosh published a blog post after the Reuters article was published and made the following statement: “Pushwoosh Inc. is a private C-Corp corporation incorporated under the laws of the state of Delaware, USA. Pushwoosh Inc. has never been owned by any company registered in the Russian Federation.”

The company also stated in the blog post that Pushwoosh Inc. outsourced the development parts of the product to the Russian company in Novosibirsk mentioned in the article, but in February 2022, Pushwoosh Inc. terminated the contract. After Pushwoosh published the blog post, Reuters asked Pushwoosh to provide evidence for its claims, but the news agency’s requests went unanswered.

Max Konev said the company has no links to the Russian government and stores its data in the US and Germany. Cybersecurity experts said that storing data abroad would not prevent Russian intelligence agencies from pressurising a Russian company to disclose it to them.

Russia, whose ties with the West have deteriorated since it seized the Crimean Peninsula in 2014 and invaded Ukraine this year, is a world leader in hacking and cyber espionage, spying on foreign governments and industries to gain a competitive advantage, Western officials said.

Giant Database

Pushwoosh code has been found in apps run by a wide range of international corporations, influential non-profit organisations and government agencies, from global consumer goods company Unilever Plc (ULVR.L) and the Union of European Football Associations (“UEFA”) to the politically powerful US gun lobby, the National Rifle Association (“NRA”), and Britain’s Labour Party.

Ten legal experts told Reuters that Pushwoosh’s business with US government agencies and private companies could violate contract law and FTC rules, or trigger sanctions. “These types of cases are within the jurisdiction of the FTC, which takes action against unfair or deceptive practices affecting US consumers,” said Jessica Rich, former director of the FTC’s Bureau of Consumer Protection.

Sanctions experts said Washington could choose to sanction Pushwoosh and that it has broad authority to do so.

According to Appfigures, a mobile app design and reporting service, Pushwoosh code has been embedded in about 8,000 apps in the Google and Apple app stores. Pushwoosh’s website says there are more than 2.3 billion devices listed in its database.


Jerome Dangu, co-founder of Confiant, a firm that monitors the misuse of data collected in online advertising supply chains, said: “Pushwoosh collects user data, including sensitive geolocation data in sensitive and official applications, which could enable offensive tracking at scale.”

Security Issues

“CDC believed Pushwoosh was a company based in the Washington, D.C., area,” spokeswoman Kristen Nordlund said in a statement, adding that this belief was based on “representations” made by the company.

CDC apps containing the Pushwoosh code included the agency’s main app and others set up to share information about a wide range of health issues, including one for doctors treating sexually transmitted diseases. The CDC also used the company’s notifications for health issues such as COVID, though it said user data was not shared with Pushwoosh.

The military told Reuters that it removed an app containing Pushwoosh in March citing “security issues”. However, it did not specify how widely the app, an information portal produced for use at the National Training Center (NTC) in California, was used by soldiers. US Army spokesperson Bryce Dubee said that the army did not experience any “operational data loss” and that the application was not connected to the army network.

Some large companies and organisations, such as UEFA and Unilever, said they thought third parties had installed the apps for them or hired a US company. “We have no direct relationship with Pushwoosh,” Unilever said in a statement, adding that Pushwoosh had been removed from one of its apps some time ago. UEFA said its contract with Pushwoosh was with a US company. UEFA declined to say whether it knew about Pushwoosh’s Russian ties but said it was reviewing its relationship with the company after being contacted by Reuters.

Zach Edwards, a security researcher who first noticed the prevalence of the Pushwoosh code while working for the non-profit organisation Internet Safety Labs, said: “The data that Pushwoosh collects is similar to data that might be collected by Facebook, Google or Amazon, but the difference is that all Pushwoosh data in the US is sent to servers controlled by a company in Russia (Pushwoosh).”

Fake Addresses and Profiles

Pushwoosh nowhere mentions its Russian origins. The company lists Washington, D.C. as its location on Twitter and, in its most recent regulatory documents filed with the Delaware secretary of state, gives as its office address a house in the suburb of Kensington, Maryland. It also lists the Maryland address on its Facebook and LinkedIn profiles.

The Kensington house is the home of a Russian friend of Konev’s, who said he has nothing to do with Pushwoosh and only allowed Konev to use his address to receive mail.

Konev said Pushwoosh began using the Maryland address to “receive business correspondence” during the coronavirus pandemic. He said he currently operates Pushwoosh from Thailand, but provided no evidence that the company is registered there; Reuters could not find a company by that name in the Thai company registry.

Pushwoosh never mentioned being based in Russia in its eight years of registration filings in the US state of Delaware, instead listing an address in Union City, California as its principal place of business from 2014 to 2016. According to Union City officials, no such address exists, a misstatement that could violate state law.

Pushwoosh used LinkedIn accounts purportedly belonging to two Washington, D.C.-based executives, Mary Brown and Noah O’Shea, to solicit sales. However, Reuters found that neither Brown nor O’Shea were real people.

The person in Brown’s photo was in fact an Austria-based dance teacher, whose picture had been taken by a photographer in Moscow; she said she had no idea how it ended up on the site.

Konev admitted that the accounts were not real. He said Pushwoosh had engaged a marketing agency in 2018 to use social media to sell Pushwoosh, not to mask the company’s Russian roots. LinkedIn said it removed the accounts after being alerted by Reuters.

CNIL x Discord

France’s data protection authority CNIL (“Commission Nationale Informatique & Libertés”) has fined Discord Inc. (“Discord” or “the Company”) €800,000 for failure to comply with various obligations under the GDPR, in particular with respect to data retention periods and the security of personal data.

Discord, a very popular application today, is a US-based company providing voice-over-IP technology that allows users to create servers with text, voice and video rooms and to chat via microphone or webcam over the internet.

In the statements made after the investigation, it was reported that in determining the amount of the fine, in addition to the violations identified and the number of people affected, the Company’s efforts for full compliance throughout the procedure and the fact that its business model is not based on the misuse of personal data were taken into account.

The GDPR articles that caused the Company to be fined 800,000 euros and the mandatory actions taken by Discord within the scope of the investigation are as follows:

Failure to set retention periods appropriate to the purpose of processing personal data and failure to comply with them (Art. 5(1)(e) GDPR)

The investigation found that the Company did not have a written data retention policy in place and that there were 2,474,000 French user accounts in the database that had been inactive for more than three years and 580,000 French user accounts that had been inactive for more than five years.

In addition, the Restricted Committee noted that the Company now has a written data retention policy in place, under which users are notified and their accounts deleted after two years of inactivity, and that this GDPR obligation was brought into compliance during the investigation.
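
A retention rule of the kind Discord adopted can be sketched in a few lines. This is a minimal illustration only, not Discord’s actual implementation: the account structure and dates are hypothetical, and a real system would also notify users before deletion, as the policy requires.

```python
from datetime import datetime, timedelta

# Illustrative policy: accounts inactive for two years are deletion candidates.
RETENTION = timedelta(days=2 * 365)

def accounts_to_delete(accounts: dict[str, datetime], now: datetime) -> list[str]:
    """Return the IDs of accounts whose last activity is older than the
    retention period, i.e. candidates for deletion under the policy."""
    return [acc for acc, last_seen in accounts.items() if now - last_seen > RETENTION]

now = datetime(2022, 11, 30)
accounts = {
    "active_user": datetime(2022, 10, 1),   # recently active, kept
    "dormant_user": datetime(2019, 5, 1),   # inactive for over three years
}
print(accounts_to_delete(accounts, now))    # ['dormant_user']
```

The point of a written policy like this is that the retention period is an explicit, auditable parameter rather than an accident of whatever data happens to accumulate.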

Failure to comply with the obligation to inform (Art. 13 GDPR)

Unlike most applications on Microsoft Windows, a user who had opened a voice room and “closed” Discord by clicking the cross was not considered to have logged out of the application, which continued to run in the background.

Therefore, even when users thought they had left the voice room, they remained audible to other users in the background, behaviour that users would not expect. In its investigation, the Restricted Committee emphasised the need to specifically inform users of this issue.

As part of the remedial measures, Discord added a pop-up window shown the first time a voice room is closed, informing the user that the application is still running and that this setting can be changed at any time.

Failure to implement appropriate technical and organisational security measures for the processing of personal data (Art. 25(2) GDPR, Art. 32 GDPR)

The Restricted Committee found that Discord’s password management policy was not sufficient to ensure users’ account security. At the time of the investigation, a six-character password consisting of letters and numbers was sufficient to create an account on Discord.

As a result of the investigation, Discord changed its password management policy: users must now set a password of at least eight characters, including at least three of the four character types (lowercase, uppercase, numbers and special characters), and must solve a CAPTCHA (for example, a checkbox or image-selection challenge) after ten failed login attempts.
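
A policy along these lines (at least eight characters, at least three of the four character classes) can be sketched as follows. This is an illustrative check only, not Discord’s actual code; in particular, the definition of “special characters” used here is an assumption.

```python
import string

def meets_policy(password: str) -> bool:
    """Check a password against a policy like the one Discord adopted:
    at least eight characters, drawn from at least three of the four
    character classes (lowercase, uppercase, digits, special characters)."""
    if len(password) < 8:
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3

print(meets_policy("abc123"))       # False: too short
print(meets_policy("Str0ng!pass"))  # True: long enough, four classes present
```

Character-class rules like this raise the minimum entropy of user-chosen passwords; the CAPTCHA after repeated failures addresses the complementary risk of automated guessing.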

Failure to carry out a Data Protection Impact Assessment (Art. 35 GDPR)

While Discord explained in its defence that it did not consider a data protection impact assessment analysis necessary, the Restricted Committee emphasised the importance of conducting the analysis, noting factors such as the volume of data processed by the Company and the use of the service by minors.

As a result of this assessment, Discord carried out two impact assessments in the context of its service and the processing related to its core services, concluding that the processing is not likely to result in a high risk to the rights and freedoms of individuals.

Dutch Government x Meta

Minister of State Alexandra van Huffelen, who leads the work of the Council of Ministers in the field of digitalisation, announced that the Dutch government will stop using Facebook if it does not improve its handling of sensitive personal data.

A report prepared by the firm contracted to examine Facebook’s privacy policy concluded that Facebook was unlikely to fulfil all the requirements, making the government’s withdrawal from the social media platform in the coming days a realistic option.

In her statements, the Minister of State said that Facebook does not adequately protect user data, that it remains unclear how the data is processed and to which channels it is transferred, and that sensitive personal data may fall into the hands of security services in the USA, where Meta, the platform’s parent company, is located.

The Council of Ministers stated that it did not want to be responsible for the risks Dutch citizens face when visiting the government’s numerous Facebook pages. Van Huffelen invited the social media company to implement a long list of measures, the most important of which is to stop storing in the USA the data obtained from Dutch users when they view government Facebook pages, and to destroy such data no later than one week after collection.

Following the discussion of the issue, Meta responded that the report does not provide an accurate guide to how Meta’s policies and tools work, emphasising that Facebook is very transparent in its handling of data.

Van Huffelen noted that withdrawing from Facebook also has drawbacks. Since the platform’s widespread use makes it a key channel for communication with citizens, the government will first investigate the extent of the consequences that ceasing to use Facebook could have.

What Happened?

The Dutch government is not the first to raise the possibility of withdrawing from Facebook: the German Data Protection Authority announced a similar decision at the beginning of 2022.

France x Microsoft & Google

The French Minister of National Education and Youth has stated that free versions of Microsoft Office 365 and Google Workspace should not be used in schools, a position that forms part of Europe’s ongoing concerns about cloud data security, competition and privacy rules.

In August, Philippe Latombe, a member of the French National Assembly, said that the free version of Microsoft Office 365, while attractive, amounted to a form of illegal storage, reminding the Minister of Education, Pap Ndiaye, of the data security issues associated with storing personal data on an American cloud service.

The Ministry of Education responded in a written statement, stating that free service offers are, in principle, outside the scope of public procurement and that paid versions of the cloud services in question are already available due to data security concerns.

Although the debate continues, French authorities have stated that Microsoft and Google cloud services that store data in the US are not compatible with European data rules such as the GDPR and the Court of Justice of the European Union’s 2020 “Schrems II” decision on cross-border data transfers, and that US law lags behind European privacy standards.

What Happened?

On 15 September 2021, Nadi Bou Hanna, director of DINUM (“La Direction Interministérielle du Numérique”), France’s inter-ministerial digital department, notified French government agencies that were considering cloud services to replace office and messaging products on government servers, such as Exchange, that Office 365 should not be used, as it is not compatible with France’s “Cloud at the Centre” initiative.

In October, the Ministry of Education advised academies to refrain from any deployment of Office 365 or Google Workspace, citing DINUM’s position, the Prime Minister’s “Cloud at the Centre” policy and CNIL’s May 2021 note advising higher education institutions to use GDPR-compliant cloud collaboration services.

In 2019, German data protection authorities reached similar conclusions, banning the use of Microsoft Office 365 in classrooms in the state of Hesse.

While Google has not commented on the matter, in July, Microsoft announced Microsoft Cloud for Sovereignty, a service that will allow public sector customers to use Microsoft cloud services in a manner consistent with their policies.

In addition, Microsoft announced plans to launch the EU Data Boundary by the end of 2022, within which EU customer data can be processed in accordance with data regulations.

Last year, Google launched a similar initiative to meet the EU’s data protection demands.

In October, President Biden signed an executive order directing US agencies to implement the Trans-Atlantic Data Privacy Framework.

The EU is expected to take steps to adopt these rules in the coming period. In that context, data transfers between the US and the EU are predicted to become more manageable, making it easier for platforms such as Microsoft and Google to win government cloud contracts within Europe.

UK x Meta

A lawsuit filed in the High Court of England and Wales demanded that Meta’s Facebook social media platform stop collecting personal data for advertising and marketing purposes.

The lawsuit, brought by technology and human rights activist Tanya O’Carroll, stated that O’Carroll’s personal data was processed by Meta for marketing purposes and that Meta did not respect the right of individuals to object to tracking and profiling.

The complaint was based on the allegation that Meta violated the GDPR and that there was a serious disproportion between the types of personal data Meta collected and the processing activities it carried out on that personal data in order to select and deliver direct marketing materials to Facebook users.

The news comes just weeks after a Washington judge fined Meta $24.6 million for 822 willful violations of the state’s campaign finance transparency laws.

Meta told a news website: “Protecting the privacy and security of people’s data is fundamental to how our business operates. That’s why we’ve invested heavily in features like Privacy Control and Ad Preferences, which provide greater transparency and control for people to understand and manage their privacy preferences.”

At the beginning of November, Meta announced that it would lay off more than 11,000 employees after dramatic profit and share price declines at the end of October, and it is currently unknown what Meta’s next step will be and what the response will be from the authorities on the subject.

Information Corner

In the UK, the GDPR remains in force as the United Kingdom GDPR (“UK GDPR”). The text is said to be word-for-word identical to the GDPR applied within the European Union, reportedly out of concern that a serious deviation from the GDPR at this stage would jeopardise data-sharing adequacy decisions after Brexit.

The government’s proposed replacement for the UK GDPR, the Data Protection and Digital Information Bill (“DPDIB”), continues to be debated in Parliament after Michelle Donelan pledged to reduce data protection bureaucracy for a “new independent nation free from EU red tape”.

Google Fonts x GDPR

Warnings over Google Fonts, which have also reached the search engine giant’s headquarters, have been making the rounds in Europe for some time, and Google has now broken its silence on the matter. Since last summer, lawyers and their clients have triggered a flood of warning letters alleging GDPR violations.

According to these warnings, any website operator who does not integrate Google Fonts locally commits such a GDPR violation: if the fonts are not stored locally, the visitor’s browser downloads them from an external server, meaning the request leaves the local environment.

According to a Munich court ruling from earlier this year, personal data is sent to the server associated with Google, i.e. to the USA. While the ruling has drawn much criticism on both legal and technical grounds, it gives sceptical lawyers grounds to issue warnings.

Google Fonts is a library of open source font families and an internet Application Programming Interface (“API”) that allows these font families to be embedded in websites.

Google has stated that it respects privacy: “The Google Fonts Web API is designed to limit the collection, storage and use of data to what is necessary for the efficient delivery of fonts and aggregate usage statistics. This data is kept secure and separate from other data.”

Google emphasises that the data will only be used for Google Fonts: “Google does not use the information collected by Google Fonts for any other purpose, and in particular not for profiling end users or advertising. In addition, the fact that Google servers necessarily receive IP addresses to transmit fonts is not unique to Google and merely reflects the way the internet works.”

Facebook x Tax Declaration Services

Tax filing service giants such as H&R Block, TaxAct and TaxSlayer allegedly quietly transmit sensitive financial information to Facebook when American citizens file their taxes online.

The data, sent through a widely used code called Meta Pixel, includes not only information such as names and email addresses, but also more detailed information such as users’ income, filing status, refund amounts and dependents’ college scholarship amounts.

The information sent to Facebook is collected regardless of whether the person using the tax filing service has an account on Facebook or other platforms operated by its owner Meta, and can be used to power the company’s advertising algorithms.

Each year, the Internal Revenue Service (the “IRS”) processes approximately 150 million electronically filed individual returns, and some of the most widely used e-filing services use Pixel.

For example, when users sign up to file their taxes on the popular TaxAct, they are asked to provide personal information, including how much money they make and their investments, to calculate their returns. A Pixel on TaxAct’s website then sends some of this data to Facebook, including users’ filing status, adjusted gross income and refund amounts.

TaxAct, which says it has about 3 million consumer and professional users, also uses Google’s analytics tool (“Google Analytics”) on its website, and news source The Markup found similar financial data sent to Google.

TaxAct is not the only tax filing service using the Meta Pixel; tax filing giant H&R Block, which also offers an electronic filing option that attracts millions of customers annually, has embedded a Pixel on its site that collects information on filers’ health savings account usage and dependents’ college tuition scholarships and expenses.

TaxSlayer, another widely used tax filing service, sent personal information to Facebook as part of Facebook’s “advanced matching” system, which collects information about Facebook visitors in an attempt to link them to Facebook accounts. The information collected through the Pixel on TaxSlayer’s site included phone numbers, the name of the user who filled out the form, and the names of dependents added to the return. TaxSlayer said it completed 10 million federal and state tax returns last year.

Pixel code on a tax return preparation site run by a financial advisory and software company called Ramsey Solutions, which uses a version of TaxSlayer’s service, collected even more personal data from a summary page, including information about income and refund amounts.

Even Intuit, the company that operates America’s industry-dominant online filing software, used Pixel. Intuit’s TurboTax sent Meta not financial information, but usernames and the time a device was last signed in. In some cases, the Pixel also collected information such as the order ID number and the user’s email address after login.

“We take the privacy of our customers’ data very seriously. TaxAct always strives to comply with all IRS regulations,” said TaxAct spokeswoman Nicole Coburn. H&R Block spokeswoman Angela Davied said the company regularly evaluates its practices and will review the information as part of its commitment to privacy.

Megan McConnell, a spokesperson for Ramsey Solutions, said that the Meta Pixel was used to provide a more personalised customer experience, that the company did not know personal data was being transferred, and that as soon as it found out, it informed TaxSlayer and asked for the Pixel to be deactivated.

TaxSlayer spokesperson Molly Richardson said the company removed the Pixel to assess its intended use. “Our clients’ privacy is of the utmost importance and we take concerns about our clients’ information very seriously,” she said, adding that Ramsey Solutions had also “decided to remove Pixel”.

After The Markup contacted TaxAct for comment, the company’s site no longer submitted financial details such as income and refund amount to Meta, though it continued to submit dependents’ names. The site still sends financial information to Google Analytics. TaxSlayer and Ramsey Solutions removed Pixel from their tax filing sites, and TurboTax stopped sending usernames through Pixel at login. H&R Block’s site continues to send information about health savings accounts and college tuition scholarships.

Meta offers the Pixel code for free to anyone who wants it, and allows businesses to embed it into their sites however they want. Using the code helps both Facebook and businesses.

Some of the sensitive data collection analysed by The Markup appears to be linked to Meta Pixel’s default behaviour, while some is due to customisations made by tax filing services, those acting on their behalf, or other software installed on the site.

The Pixel embedded by TaxSlayer and TaxAct includes a feature called “automatic advanced matching”. This feature scans forms looking for fields that it thinks contain personally identifiable information, such as phone number, first name, last name or email address, and sends the detected information to Meta.
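The field-scanning described above can be pictured with a small sketch. This is not Meta’s actual matching code; the keyword patterns and the `detect_pii_fields` function below are illustrative assumptions about how such name-based heuristics might flag form inputs as personal data.

```python
import re

# Illustrative field-name heuristics (assumptions, not Meta's real rules):
# a client-side script scans form inputs and flags names that look like
# personally identifiable information.
PII_PATTERNS = {
    "email": re.compile(r"e[-_]?mail", re.I),
    "phone": re.compile(r"phone|tel|mobile", re.I),
    "first_name": re.compile(r"first[-_]?name|fname", re.I),
    "last_name": re.compile(r"last[-_]?name|lname|surname", re.I),
}

def detect_pii_fields(form_fields):
    """Return {category: field_name} for fields whose names look like PII."""
    detected = {}
    for name in form_fields:
        for category, pattern in PII_PATTERNS.items():
            if category not in detected and pattern.search(name):
                detected[category] = name
    return detected

fields = ["email_address", "tel_number", "fname", "filing_status"]
print(detect_pii_fields(fields))
# {'email': 'email_address', 'phone': 'tel_number', 'first_name': 'fname'}
```

The point of the sketch is that such a scanner never needs to be told which fields matter: anything whose name merely resembles an email or phone field gets picked up, which is how a tax form’s inputs can end up flagged without the site operator deliberately configuring it.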

The data collected by the matching feature is sent in hashed form to “help protect user privacy”, but because hashing is deterministic, Meta is often able to match the hashed values back to the original, unobscured data.

Meta collects so much data that sometimes it may not even realise where it’s going.

Earlier this year Vice reported on a leaked Facebook document written by Facebook privacy engineers that said the company “does not have sufficient control and accountability over how our systems use data”, making it difficult to promise not to use certain data for certain purposes.

Meta spokesman Dale Hogan pointed to the company’s rules on sensitive financial information: “Advertisers should not send sensitive information about people through our business tools. This is against our policies, and we train advertisers on setting up their business tools correctly to prevent this from happening. Our system is designed to filter out potentially sensitive data that it can detect.”

Google spokeswoman Jackie Berté said the company has strict policies against personalised advertising based on sensitive information, that Google Analytics data is kept confidential and is not tied to an individual, and that its policies prohibit customers from sending Google data that could be used to identify a user.

The penalties for disclosing data without permission are potentially severe, with fines and even jail time possible, but Nina Olson, executive director of the nonprofit Center for Taxpayer Rights, said she was not aware of any criminal cases being pursued.