
Zoom Gets More Popular Despite Worries About Links to China

Very few companies can boast of having their name also used as a verb. Zoom is one of them. The popularity of the videoconferencing platform continues to grow around the world despite continued questions about whether Chinese authorities are monitoring the calls.

Since Zoom became a household word last year during the pandemic, internet users including companies and government agencies have asked whether the app’s data centers and staff in China are passing call logs to Chinese authorities.

“Some of the more informed know about that, but the vast majority, they don’t know about that, or even if they do, they really don’t give much thought about it,” said Jack Nguyen, partner at the business advisory firm Mazars in Ho Chi Minh City.

He said in Vietnam, for example, many people resent China over territorial spats, but Vietnamese tend to use Zoom as willingly as they sign on to rivals such as Microsoft Teams. They like Zoom’s free 40 minutes per call, said Nguyen.

Whether to use Silicon Valley-headquartered Zoom still comes down to a user-by-user calculation of the service’s benefits versus the possibility that call logs are being viewed in China, analysts say. China seeks to identify and stop internet content that flouts Communist Party interests.

The 10-year-old listed company, officially named Zoom Video Communications, reported more than $1 billion in revenue in the April-June quarter this year, up 54% over the same quarter of 2020, when the COVID-19 pandemic drove face-to-face meetings online. In the same quarter, the most recent one detailed by the company, Zoom had 504,900 customers with more than 10 employees, up about 36% year on year.

Zoom commanded a 42.8% U.S. market share, leading its competitors, as of May 2020, the news website LearnBonds reported. Its U.S. share had risen to 55% by March this year, according to ToolTester Network data.

Tech media cite Zoom’s free 40 minutes and capacity for up to 100 call participants as major reasons for its popularity.

Links to China?

Keys that Zoom uses to encrypt and decrypt meetings may be sent to servers in China, Wired Business Media’s website Security Week has reported. Some encryption keys were issued by servers in China, news website WCCF Tech said.

Zoom did not answer VOA’s requests this month for comment.

Zoom has acknowledged keeping at least one data center and at least one employee in China, where the communist government requires resident tech firms to provide user data on request. In September 2019, the Chinese government blocked Zoom in China, and in April last year Zoom said some international calls had been routed in error through a China-based data center.

“Odds are high” of China getting records of Zoom calls, said Jacob Helberg, a senior adviser at the Stanford University Center on Geopolitics and Technology.

“If you have Zoom engineers in China who have access to the actual servers, from an engineering standpoint those engineers can absolutely have access to content of potential communications in China,” he said.

Zoom said in a statement in early April 2020 that certain meetings held by its non-Chinese users might have been “allowed to connect to systems in China, where they should not have been able to connect,” SmarterAnalyst.com reported.

Excitement and caution

Zoom said in 2019 it had put in place “strict geo-fencing procedures around our mainland China data center.”

“No meeting content will ever be routed through our mainland China data center unless the meeting includes a participant from China,” it said in a blog post.

Among the bigger users of Zoom is the University of California, a 10-campus system that switched to online learning in early 2020. Zoom was selected following a request for proposals “years” before the pandemic, a UC-Berkeley spokesperson told VOA on Thursday.

Elsewhere in the United States, NASA has banned employees from using Zoom, and the Senate has urged its members to avoid it because of security concerns. The German Foreign Ministry and Australian Defense Force restrict use as well, while Taiwan barred Zoom for government business last year. China claims sovereignty over self-ruled Taiwan, which has caused decades of political hostility.

“For Taiwan, there’s still some doubt,” said Brady Wang, a Taipei analyst with the market intelligence firm Counterpoint Research, referring particularly to Zoom’s encryption software. “And in the final analysis, these kinds of choices are numerous, so it’s not like you must rely on Zoom.”

LinkedIn’s withdrawal from China announced this month may spark new scrutiny over Zoom, said Zennon Kapron, founder and director of Kapronasia, a Shanghai financial industry research firm.

“I think when you look at the other technology players that are currently in China or that have relations to China such as Zoom, there will be a renewed push probably by consumers, businesses and even regulators in some jurisdictions to really try to understand and pry apart what the roles of Chinese suppliers or development houses are in developing some of these platforms and the potential security risks that go with them,” Kapron said.

Facebook Dithered in Curbing Divisive User Content in India

Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as its own employees cast doubt over the company’s motivations and interests.

From research as recent as March of this year to company memos that date back to 2019, the internal company documents on India highlight Facebook’s constant struggles in quashing abusive content on its platforms in the world’s biggest democracy and the company’s largest growth market. Communal and religious tensions in India have a history of boiling over on social media and stoking violence.

The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address these issues. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party, the BJP, are involved.

Modi has been credited for leveraging the platform to his party’s advantage during elections, and reporting from The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP. Both Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at Facebook headquarters.

According to the documents, Facebook saw India as one of the most “at risk countries” in the world and identified both Hindi and Bengali as priorities for “automation on violating hostile speech.” Yet Facebook didn’t have enough local-language moderators or content-flagging systems in place to stop misinformation that at times led to real-world violence.

In a statement to the AP, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali” which has “reduced the amount of hate speech that people see by half” in 2021. 

“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said. 

This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions were obtained by a consortium of news organizations, including the AP.

In February 2019 and ahead of a general election when concerns about misinformation were running high, a Facebook employee wanted to understand what a new user in the country saw on their news feed if all they did was follow pages and groups solely recommended by the platform.

The employee created a test user account and kept it live for three weeks, during which an extraordinary event shook India — a militant attack in disputed Kashmir killed more than 40 Indian soldiers, bringing the country to near war with rival Pakistan.

In a report, titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalistic Messages,” the employee, whose name is redacted, said they were shocked by the content flooding the news feed, which “has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”

Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.

The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.

The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”

“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages,” the spokesperson said.

Other research files on misinformation in India highlight just how massive a problem it is for the platform.

In January 2019, a month before the test user experiment, another assessment raised similar alarms about misleading content. 

In a presentation circulated to employees, the findings concluded that Facebook’s misinformation tags weren’t clear enough for users, underscoring that it needed to do more to stem hate speech and fake news. Users told researchers that “clearly labeling information would make their lives easier.”

Alongside misinformation, the leaked documents reveal another problem dogging Facebook in India: anti-Muslim propaganda, especially by Hindu-hardline groups.

India is Facebook’s largest market with over 340 million users — nearly 400 million Indians also use the company’s messaging service WhatsApp. But both have been accused of being vehicles to spread hate speech and fake news against minorities.

In February 2020, these tensions came to life on Facebook when a politician from Modi’s party uploaded a video on the platform in which he called on his supporters to remove mostly Muslim protesters from a road in New Delhi if the police didn’t. Violent riots erupted within hours, killing 53 people. Most of them were Muslims. Only after thousands of views and shares did Facebook remove the video.

In April, misinformation targeting Muslims again went viral on its platform as the hashtag “Coronajihad” flooded news feeds, blaming the community for a surge in COVID-19 cases. The hashtag was popular on Facebook for days but was later removed by the company.

The misinformation triggered a wave of violence, business boycotts and hate speech toward Muslims.

Criticisms of Facebook’s handling of such content were amplified in August of last year when The Wall Street Journal published a series of stories detailing how the company had internally debated whether to classify a Hindu hard-line lawmaker close to Modi’s party as a “dangerous individual” — a classification that would ban him from the platform — after a series of anti-Muslim posts from his account.

The documents also show how the company’s South Asia policy head herself had shared what many felt were Islamophobic posts on her personal Facebook profile. 

Months later the India Facebook official quit the company. Facebook also removed the politician from the platform, but documents show many company employees felt the platform had mishandled the situation, accusing it of selective bias to avoid being in the crosshairs of the Indian government.

As recently as March this year, the company was internally debating whether it could control the “fear mongering, anti-Muslim narratives” pushed on its platform by Rashtriya Swayamsevak Sangh, a far-right Hindu nationalist group of which Modi is also a member.

In one document titled “Lotus Mahal,” the company noted that members with links to the BJP had created multiple Facebook accounts to amplify anti-Muslim content.

The research found that much of this content was “never flagged or actioned” since Facebook lacked “classifiers” and “moderators” in Hindi and Bengali languages. 

Facebook said it added hate speech classifiers in Hindi starting in 2018 and introduced Bengali in 2020.

Apple Updates App Store Payment Rules in Concession to Developers

Apple has updated its App Store rules to allow developers to contact users directly about payments, a concession in a legal settlement with companies challenging its tightly controlled marketplace.

According to App Store rules updated Friday, developers can now contact consumers directly about alternate payment methods, bypassing Apple’s commission of 15 or 30%.

They will be able to ask users for basic information, such as names and e-mail addresses, “as long as this request remains optional,” said the iPhone maker.

Apple proposed the changes in August in a legal settlement with small app developers.

But the concession is unlikely to satisfy firms like “Fortnite” developer Epic Games, with which the tech giant has been grappling in a drawn-out dispute over its payments policy.  

Epic launched a case aiming to break Apple’s grip on the App Store, accusing the iPhone maker of operating a monopoly in its shop for digital goods or services.

In September, a judge ordered Apple to loosen control of its App Store payment options, but said Epic had failed to prove that antitrust violations had taken place.

For Epic and others, the ability to redirect users to an out-of-app payment method is not enough: it wants players to be able to pay directly without leaving the game.

Both sides have appealed. 

Apple is also facing investigations from US and European authorities that accuse it of abusing its dominant position.

Another Whistleblower Accuses Facebook of Wrongdoing: Report

A former Facebook worker reportedly told U.S. authorities Friday the platform has put profits before stopping problematic content, weeks after another whistleblower helped stoke the firm’s latest crisis with similar claims.

The unnamed new whistleblower filed a complaint with the U.S. Securities and Exchange Commission, the federal financial regulator, that could add to the company’s woes, said a Washington Post report.

Facebook has faced a storm of criticism over the past month after former employee Frances Haugen leaked internal studies showing the company knew of potential harm fueled by its sites, prompting U.S. lawmakers to renew a push for regulation.

In the SEC complaint, the new whistleblower recounts alleged statements from 2017, when the company was deciding how to handle the controversy related to Russia’s interference in the 2016 U.S. presidential election.  

“It will be a flash in the pan. Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine,” Tucker Bounds, a member of Facebook’s communications team, was quoted in the complaint as saying, The Washington Post reported.  

The second whistleblower signed the complaint on October 13, a week after Haugen’s testimony before a Senate panel, according to the report.

Haugen told lawmakers that Facebook put profits over safety, which led her to leak reams of internal company studies that underpinned a damning Wall Street Journal series.

The Washington Post reported that the new whistleblower’s SEC filing claims the social media giant’s managers routinely undermined efforts to combat misinformation and other problematic content for fear of angering then-U.S. President Donald Trump or turning off the users who are key to profits.

Erin McPike, a Facebook spokesperson, said the article was “beneath the Washington Post, which during the last five years would only report stories after deep reporting with corroborating sources.”  

Facebook has faced previous firestorms of controversy, but they did not translate into substantial U.S. legislation to regulate social media.

China’s Reach Into Africa’s Digital Sector Worries Experts

Chinese companies like Huawei and the Transsion group are responsible for much of the digital infrastructure and smartphones used in Africa. Chinese phones built in Africa come with already installed apps for mobile money transfer services that increase the reach of Chinese tech companies. But while many Africans may find the availability of such technology useful, the trend worries some experts on data management.

China has taken the lead in the development of Africa’s artificial intelligence and communication infrastructure. 

In July 2020, Cameroon contracted with Huawei, a Chinese telecommunication infrastructure company, to equip government data centers. In 2019, Kenya was reported to have signed the same company to deliver smart city and surveillance technology worth $174 million. 

A study by the Atlantic Council, a U.S.-based think tank, found that Huawei has developed 30% of the 3G network and 70% of the 4G network in Africa. 

Eric Olander is the managing editor of the Chinese Africa Project, a media organization examining China’s engagement in Africa. He says Chinese investment is helping Africa grow.

“The networking equipment is really what is so vital and what the Chinese have been able to do with Huawei, in particular, is they bring the networking infrastructure together with state-backed loans and that’s the combination that has proven to be very effective. So, a lot of governments that would not be able to afford 4G and 5G network upgrades are able to get these concessional loans from the China Exim Bank that are used and to purchase Huawei equipment,” Olander said.

Data compiled by the Australian Strategic Policy Institute, a Canberra-based defense and policy research organization, show China has built 266 technology projects in Africa ranging from 4G and 5G telecommunications networks to data centers, smart city projects that modernize urban centers and education programs.  

But while the new technology has helped modernize the African continent, some say it comes at a cost that is not measured in dollars. 

China loaned the Ethiopian government more than $3 billion to be used to upgrade its digital infrastructure. Critics say the money helped Ethiopia expand its authoritarian rule and monitor telecom network users. 

According to an investigation by The Wall Street Journal, Huawei technology helped the Ugandan and Zambian governments spy on government critics.  In 2019, Uganda procured millions of dollars in closed circuit television surveillance technology from Huawei, ostensibly to help control urban crime.

Police in the East African nation admitted to using the system’s facial recognition ability supplied by Huawei to arrest more than 800 opposition supporters last year.

Bulelani Jili, a cybersecurity fellow at the Belfer Center at Harvard University, says African citizens must be made aware of the risks in relations with Chinese tech companies.

“There is need [for] greater public awareness and attention to this issue in part because it’s a key metric surrounding both development but also the kind of Africa-China relations going forward…. We should also be thinking about [how] data sovereignty is going to be a key factor going forward.”

Jili said data sharing will create more challenges for relations between Africa and China. 

“There are security questions about data, specifically how it’s managed, who owns it, and how governments depend on private actors to provide them the technical capacity to initiate certain state services.”  

London-based organization Privacy International says at least 24 African countries have laws that protect the personal data of their citizens. But experts say most of those laws are not enforced. 

Facebook Kept Oversight Board in Dark about Special Treatment of VIP Accounts

Facebook’s quasi-independent oversight board criticized the company Thursday, saying many high-profile accounts, such as those of celebrities and politicians, are not held to the same standards as other accounts.

In a blog post, the board said, “Facebook has not been fully forthcoming with the Board on its ‘Cross-Check’ system, which the company uses to review content decisions relating to high-profile users.”

The Wall Street Journal had previously reported on the company’s double standards and said 5.8 million accounts fell under the Cross-Check system.

“At times, the documents show, [Cross-Check] has protected public figures whose posts contain harassment or incitement to violence, violations that would typically lead to sanctions for regular users,” the Journal reported.

Facebook spokesman Andy Stone told the Journal that Cross-Check “was designed for an important reason: to create an additional step so we can accurately enforce policies on content that could require more understanding.”

The board said Facebook kept it in the dark about the existence of Cross-Check.

“When Facebook referred the case related to former U.S. President Trump to the Board, it did not mention the cross-check system,” the board wrote. “Given that the referral included a specific policy question about account-level enforcement for political leaders, many of whom the Board believes were covered by cross-check, this omission is not acceptable.”

“Facebook only mentioned cross-check to the Board when we asked whether Mr. Trump’s page or account had been subject to ordinary content moderation processes.”

The board urged Facebook to provide greater transparency.

The board was created last October after the company faced criticism that it was not dealing quickly and effectively with what some consider problematic content.

Decisions by the board are binding and cannot be overturned. 


Some information in this report comes from Reuters.

New Name for Facebook? Critics Cry Smoke and Mirrors

Facebook critics pounced Wednesday on a report that the social network plans to rename itself, arguing it may be seeking to distract from recent scandals and controversy.

The report from tech news website The Verge, which Facebook refused to confirm, said the embattled company was aiming to show its ambition to be more than a social media site.

But an activist group calling itself The Real Facebook Oversight Board warned that major industries like oil and tobacco had rebranded to “deflect attention” from their problems.

“Facebook thinks that a rebrand can help them change the subject,” said the group’s statement, adding the real issue was the need for oversight and regulation.

Facebook spokesman Andy Stone told AFP: “We don’t have any comment and aren’t confirming The Verge’s report.”

The Verge cited an unnamed source noting the name would reflect Facebook’s efforts to build the “metaverse,” a virtual reality version of the internet that the tech giant sees as the future.

Facebook on Monday announced plans to hire 10,000 people in the European Union to build the metaverse, with CEO Mark Zuckerberg emerging as a leading promoter of the concept.

Fallout

The announcement comes as Facebook grapples with the fallout of a damaging scandal, major outages of its services and rising calls for regulation to curb its vast influence.

The company has faced a storm of criticism over the past month after former employee Frances Haugen leaked internal studies showing Facebook knew its sites could be harmful to young people’s mental health.

The Washington Post last month suggested that Facebook’s interest in the metaverse is “part of a broader push to rehabilitate the company’s reputation with policymakers and reposition Facebook to shape the regulation of next-wave internet technologies.”

Silicon Valley analyst Benedict Evans argued a rebranding would ignore fundamental problems with the platform.

“If you give a broken product a new name, people will quite quickly work out that this new brand has the same problems,” he tweeted.

“A better ‘rebrand’ approach is generally to fix the problem first and then create a new brand reflecting the new experience,” he added.

Google rebranded itself as Alphabet in a corporate reconfiguration in 2015, but the online search and ad powerhouse remains its defining unit despite other operations such as Waymo self-driving cars and Verily life sciences.

Facebook to Pay Up to $14 Million Over Discrimination Against US Workers 

Facebook must pay a $4.75 million fine and up to $9.5 million in back pay to eligible victims who say the company discriminated against U.S. workers in favor of foreign ones, the Justice Department announced Tuesday. 

The discrimination took place from at least January 1, 2018, until at least September 18, 2019. 

The Justice Department said Facebook “routinely refused” to recruit or consider U.S. workers, including U.S. citizens and nationals, asylees, refugees and lawful permanent residents, in favor of temporary visa holders. Facebook also helped the visa holders get their green cards, which allowed them to work permanently.

In a separate settlement, the company also agreed to train its employees in anti-discrimination rules and conduct wider searches to fill jobs. 

The fines and back pay are the largest civil awards ever given by the DOJ’s civil rights division in its 35-year history. 

“Facebook is not above the law and must comply with our nation’s civil rights laws,” Assistant Attorney General Kristen Clarke told reporters in a telephone conference. 

“While we strongly believe we met the federal government’s standards in our permanent labor certification [PERM] practices, we’ve reached agreements to end the ongoing litigation and move forward with our PERM program, which is an important part of our overall immigration program,” a Facebook spokesperson said in a statement. “These resolutions will enable us to continue our focus on hiring the best builders from both the U.S. and around the world and supporting our internal community of highly skilled visa holders who are seeking permanent residence.” 

Some information in this report came from the Associated Press.

US Puts Cryptocurrency Industry on Notice Over Ransomware Attacks 

Suspected ransomware payments totaling $590 million were made in the first six months of this year, more than the $416 million reported for all of 2020, U.S. authorities said on Friday, as Washington put the cryptocurrency industry on alert about its role in combating ransomware attacks. 

The U.S. Treasury Department said the average amount of reported ransomware transactions per month in 2021 was $102.3 million, with REvil/Sodinokibi, Conti, DarkSide, Avaddon, and Phobos the most prevalent ransomware strains reported. 

President Joe Biden has made the government’s cybersecurity response a top priority for the most senior levels of his administration following a series of attacks this year that threatened to destabilize U.S. energy and food supplies. 

Avoiding U.S. sanctions

Seeking to stop the use of cryptocurrencies in the payment of ransomware demands, Treasury told members of the crypto community they are responsible for making sure they do not directly or indirectly help facilitate deals prohibited by U.S. sanctions. 

Its new guidance said the industry plays an increasingly critical role in preventing those blacklisted from exploiting cryptocurrencies to evade sanctions. 

“Treasury is helping to stop ransomware attacks by making it difficult for criminals to profit from their crimes, but we need partners in the private sector to help prevent this illicit activity,” Deputy Treasury Secretary Wally Adeyemo said in a statement. 

The new guidance also advised cryptocurrency exchanges to use geolocation tools to block access from countries under U.S. sanctions. 

Hackers use ransomware to take down systems that control everything from hospital billing to manufacturing. They stop only after receiving hefty payments, typically in cryptocurrency. 

Large scale hacks

This year, gangs have hit numerous U.S. companies in large scale hacks. One such attack on pipeline operator Colonial Pipeline led to temporary fuel supply shortages on the U.S. East Coast. Hackers also targeted an Iowa-based agricultural company, sparking fears of disruptions to grain harvesting in the Midwest. 

The Biden administration last month unveiled sanctions against cryptocurrency exchange Suex OTC, S.R.O. over its alleged role in enabling illegal payments from ransomware attacks, officials said, in the Treasury’s first such move against a cryptocurrency exchange over ransomware activity.

US Authorities Disclose Ransomware Attacks Against Water Facilities

U.S. authorities said on Thursday that four ransomware attacks had penetrated water and wastewater facilities in the past year, and they warned similar plants to check for signs of intrusions and take other precautions. 

The alert from the Cybersecurity and Infrastructure Security Agency (CISA) cited a series of apparently unrelated hacking incidents from September 2020 to August 2021 that used at least three different strains of ransomware, which encrypts computer files and demands payment for them to be restored. 

Attacks at an unnamed Maine wastewater facility three months ago and one in California in August moved past desktop computers and paralyzed the specialized supervisory control and data acquisition (SCADA) devices that issue mechanical commands to the equipment. 

The Maine system had to turn to manual controls, according to the alert co-signed by the FBI, National Security Agency and Environmental Protection Agency. 

A March hack in Nevada also reached SCADA devices that provided operational visibility but could not issue commands. 

CISA said it is seeing increasing attacks on many forms of critical infrastructure, in line with those on the water plants. 

In some cases, the water facilities are handicapped by low municipal spending on cybersecurity.

The Department of Homeland Security agency’s recommendations include access log audits and strict use of additional factors for authentication beyond passwords.  

Facebook Objects to Releasing Private Posts About Myanmar’s Rohingya Campaign

Facebook was used to spread disinformation about the Rohingya, the Muslim ethnic minority in Myanmar, and in 2018 the company began to delete posts, accounts and other content it determined were part of a campaign to incite violence. 

That deleted but stored data is at issue in a case in the United States over whether Facebook should release the information as part of a claim in international court. 

Facebook this week objected to part of a U.S. magistrate judge’s order that could have an impact on how much data internet companies must turn over to investigators examining the role social media played in a variety of international incidents, from the 2017 Rohingya genocide in Myanmar to the 2021 Capitol riot in Washington. 

The judge ruled last month that Facebook had to give information about these deleted accounts to Gambia, the West African nation, which is pursuing a case in the International Court of Justice against Myanmar, seeking to hold the Asian nation responsible for the crime of genocide against the Rohingya.

But in its filing Wednesday, Facebook said the judge’s order “creates grave human rights concerns of its own, leaving internet users’ private content unprotected and thereby susceptible to disclosure — at a provider’s whim — to private litigants, foreign governments, law enforcement, or anyone else.” 

The company said it was not challenging the order when it comes to public information from the accounts, groups and pages it has preserved. It objects to providing “non-public information.” If the order is allowed to stand, it would “impair critical privacy and freedom of expression rights for internet users — not just Facebook users — worldwide, including Americans,” the company said. 

Facebook has argued that providing the deleted posts would violate U.S. privacy law, citing the Stored Communications Act, the 35-year-old statute that established privacy protections for electronic communications.

Deleted content protected? 

In his September decision, U.S. Magistrate Judge Zia M. Faruqui said that once content is deleted from an online service, it is no longer protected.

Paul Reichler, a lawyer for Gambia, told VOA that Facebook’s concern about privacy is misplaced. 

“Would Hitler have privacy rights that should be protected?” Reichler said in an interview with VOA. “The generals in Myanmar ordered the destruction of a race of people. Should Facebook’s business interests in holding itself out as protecting the privacy rights of these Hitlers prevail over the pursuit of justice?” 

But Orin Kerr, a law professor at the University of California at Berkeley, said on Twitter that the judge erred and that the implication of the ruling is that “if a provider moderates contents, all private messages and emails deleted can be freely disclosed and are no longer private.”

The 2017 military crackdown on the Rohingya resulted in more than 700,000 people fleeing their homes to escape mass killings and rapes, a crisis that the United States has called “ethnic cleansing.”

‘Coordinated inauthentic behavior’ 

Human rights advocates say Facebook had been used for years by Myanmar officials to set the stage for the crimes against the Rohingya. 

Frances Haugen, the former Facebook employee who testified about the company in Congress last week, said Facebook’s focus on keeping users engaged on its site contributed to “literally fanning ethnic violence” in countries. 

In 2018, Facebook deleted and banned accounts of key individuals, including the commander in chief of Myanmar’s armed forces and the military’s television network, as well as 438 pages, 17 groups and 160 Facebook and Instagram accounts — what the company called “coordinated inauthentic behavior.” The company estimated 12 million people in Myanmar, a nation of 54 million, followed these accounts. 

Facebook commissioned an independent human rights study of its role that concluded that prior to 2018, it indeed failed to prevent its service “from being used to foment division and incite offline violence.”

Facebook kept the data on what it deleted for its own forensic analysis, the company told the court. 

The case comes at a time when law enforcement and governments worldwide increasingly seek information from technology companies about the vast amount of data they collect on users. 

Companies have long cited privacy concerns to protect themselves, said Ari Waldman, a professor of law and computer science at Northeastern University. What’s new is the vast quantity of data that companies now collect, a treasure trove for investigators, law enforcement and government. 

“Private companies have untold amounts of data based on the commodification of what we do,” Waldman said.

Privacy rights should always be balanced with other laws and concerns, such as the pursuit of justice, he added.

Facebook working with the IIMM 

In August 2020, Facebook confirmed that it was working with the Independent Investigative Mechanism for Myanmar (IIMM), a United Nations-backed group that is investigating Myanmar. The U.N. Human Rights Council established the IIMM, or “Myanmar Mechanism,” in September 2018 to collect evidence of the country’s most serious international crimes.

Recently, IIMM told VOA it has been meeting regularly with Facebook employees to gain access to information on the social media network related to its ongoing investigations in the country. 

A spokesperson for IIMM told VOA’s Burmese Service that Facebook “has agreed to voluntarily provide some, but not all, of the material the Mechanism has requested.” 

IIMM head Nicholas Koumjian wrote to VOA that the group is seeking material from Facebook “that we believe is relevant to proving criminal responsibility for serious international crimes committed in Myanmar that fall within our mandate.”  

Facebook told VOA in an email it is cooperating with the U.N. Myanmar investigators. 

“We’ve committed to disclose relevant information to authorities, and over the past year we’ve made voluntary, lawful disclosures to the IIMM and will continue to do so as the case against Myanmar proceeds,” the spokesperson wrote. The company has made what it calls “12 lawful data disclosures” to the IIMM but didn’t provide details. 

Human rights activists are frustrated that Facebook is not doing more to crack down on bad actors who are spreading hate and disinformation on the site.

“Look, I think there are many people at Facebook who want to do the right thing here, and they are working pretty hard,” said Phil Robertson, who covers Asia for Human Rights Watch. “But the reality is, they still need to escalate their efforts. I think that Facebook is more aware of the problems, but it’s also in part because so many people are telling them that they need to do better.” 

Matthew Smith of the human rights organization Fortify Rights, which closely tracked the ethnic cleansing campaign in Myanmar, said the company’s business success indicates it could do a better job of identifying harmful content. 

“Given the company’s own business model of having this massive capacity to deal with massive amounts of data in a coherent and productive way, it stands to reason that the company would absolutely be able to understand and sift through the data points that could be actionable,” Smith said. 

Gambia has until later this month to respond to Facebook’s objections.

Microsoft to Shut Down LinkedIn in China Over Censorship Concerns

Microsoft will close LinkedIn in China later this year, the company announced Thursday.

The professional networking site, which started operating in China in 2014, faces a “significantly more challenging operating environment and greater compliance requirements” in the country, it said in a blog post.

“We recognized that operating a localized version of LinkedIn in China would mean adherence to requirements of the Chinese government on Internet platforms,” the company said. “While we strongly support freedom of expression, we took this approach in order to create value for our members in China and around the world.”

However, it seems China’s regulatory burdens have become too much.

Earlier this year, Chinese regulators told the company it had to better police content, The Wall Street Journal reported. The company began blocking some content and profiles that Chinese regulators prohibited, including profiles of journalists.

“While we’ve found success in helping Chinese members find jobs and economic opportunity, we have not found that same level of success in the more social aspects of sharing and staying informed,” LinkedIn said.

LinkedIn is not completely leaving the Chinese market. It will instead offer a service called InJobs, which will not have a social feed and will not allow users to share content, Reuters reported.

LinkedIn was the only U.S.-based social networking site still available to Chinese users.

Microsoft bought the company in 2016, and the site now boasts 774 million users.

Some information in this report comes from Reuters.

Forum Urges Social Networks to Act Against Antisemitism

Social media giants were urged to act Wednesday to stem online antisemitism during an international conference in Sweden focused on the growing amount of hatred published on many platforms. 

The Swedish government invited social media giants TikTok, Google and Facebook along with representatives from 40 countries, the United Nations and Jewish organizations to the event designed to tackle the rising global scourge of antisemitism.

Sweden hosted the event in the southern city of Malmo, which was a hotbed of antisemitic sentiment in the early 2000s but which during World War II welcomed Danish Jews fleeing the Nazis and inmates rescued from concentration camps in 1945.

“What they see today in social media is hatred,” World Jewish Congress head Ronald Lauder told the conference. 

Google told the event, officially called the International Forum on Holocaust Remembrance and Combating Anti-Semitism, that it was earmarking 5 million euros ($5.78 million) to combat antisemitism online. 

“We want to stop hate speech online and ensure we have a safe digital environment for our citizens,” French President Emmanuel Macron said in a prerecorded statement.

European organizations accused tech companies of “completely failing to address the issue,” saying antisemitism was being repackaged and disseminated to a younger generation through platforms like Instagram and TikTok. 

Antisemitic tropes are “rife across every social media platform,” according to a study linked to the conference that was carried out by three nongovernmental organizations. 

Hate speech remains more prolific and extreme on sites such as Parler and 4chan but is being introduced to young users on mainstream platforms, the study said. 

On Instagram, where almost 70% of global users are aged 13 to 34, there are millions of results for hashtags relating to antisemitism, the research found. 

On TikTok, where 69% of users are aged 16 to 24, it said a collection of three hashtags linked to antisemitism was viewed more than 25 million times in six months.

In response to the report, a Facebook spokesperson said antisemitism was “completely unacceptable” and that its policies on hate speech and Holocaust denial had been tightened. 

A TikTok spokesperson said the platform “condemns antisemitism” and would “keep strengthening our tools for fighting antisemitic content.” 

According to the EU’s Fundamental Rights Agency, 9 out of 10 Jews in the EU say antisemitism has risen in their country and 38% have considered emigrating because they no longer feel safe. 

“Antisemitism takes the shape of extreme hatred on social networks,” said Ann Katina, the head of the Jewish Community of Malmo organization that runs two synagogues. 

“It hasn’t just moved there, it has grown bigger there,” she told AFP. 

Swedish Prime Minister Stefan Lofven has made the fight against antisemitism one of his last big initiatives before leaving office next month and has vowed better protection for Sweden’s 15,000-20,000 Jews. 

Reports of antisemitic crimes in the Scandinavian country rose by more than 50% between 2016 and 2018, from 182 to 278, according to the latest statistics available from the Swedish National Council for Crime Prevention. 

The Jewish community in Malmo has fluctuated over the years, from more than 2,000 in 1970 to just more than 600 now. 

In the early 2000s, antisemitic attacks in Malmo made global headlines. Incidents included verbal insults, assaults and Molotov cocktails thrown at the synagogue.

In response, authorities vowed to boost police resources and increase funding to protect congregations under threat. 

Mirjam Katzin, who coordinates efforts against antisemitism in Malmo schools, the only such position in Sweden, said there was “general concern” among Jews in the city.

“Some never experience any abuse, while others will hear the word ‘Jew’ used as an insult, jokes about Hitler or the Holocaust or various conspiracy theories,” she said. 


US Staging Global Conference to Combat Ransomware Attacks

The White House is holding a two-day international conference starting Wednesday to combat ransomware computer attacks on business operations across the globe that cost companies, schools and health services an estimated $74 billion in damages last year.

U.S. officials are meeting on Zoom calls with their counterparts from at least 30 countries to discuss ways to combat the clandestine attacks. Russia, a key launchpad for many of the attacks, was left off the invitation list as Washington and Moscow officials engage directly on attacks coming from Russia.

This year has seen an epidemic of ransomware attacks in which hackers from distant lands remotely lock victims’ computers and demand large extortion payments to allow normal operations to resume.

Ransomware payments topped $400 million globally in 2020, the United States says, and totaled more than $81 million in the first quarter of 2021.

Two U.S. businesses, the Colonial Pipeline Company that delivers fuel to much of the eastern part of the country and the JBS global beef producer, were targeted in major ransomware attacks in May.

Colonial paid $4.4 million in ransom demands, although U.S. government officials were soon able to surreptitiously recover $2.3 million of the payment. JBS said it paid an $11 million demand.

Other U.S. companies were also attacked, including CNA Financial, one of the country’s biggest insurance carriers; Applus Technologies, which provides testing equipment to state vehicle inspection stations; ExaGrid, a backup storage vendor that helps businesses recover after ransomware attacks; and the school system in the city of Buffalo, New York.

Attackers have also targeted victims in other countries, including Ireland’s health care system, the Taiwan-based computer manufacturer Acer and the Asia division of the AXA France cyber insurer.

A senior White House official, briefing reporters ahead of the ransomware conference, said the U.S. views the meetings “as the first of many conversations” on ways to combat the attacks.

At a summit in Geneva in June, U.S. President Joe Biden and Russian President Vladimir Putin created a working group of experts to deal with ransomware attacks.

“We do look to the Russian government to address ransomware criminal activity coming from actors within Russia,” the White House official said. “I can report that we’ve had, in the experts group, frank and professional exchanges in which we’ve communicated those expectations. We’ve also shared information with Russia regarding criminal ransomware activity being conducted from its territory.”

“We’ve seen some steps by the Russian government and are looking to see follow-up actions,” the official said, without elaborating.

While U.S. officials say they know the identity of some of the attackers in Russia, Moscow does not extradite its citizens for criminal prosecutions.

One of the major topics at the conference, the Biden official said, will be how countries can cooperate to trace and disrupt criminal use of cryptocurrencies like Bitcoin.

The countries scheduled to join the U.S. at the ransomware conference are Australia, Brazil, Bulgaria, Canada, the Czech Republic, the Dominican Republic, Estonia, France, Germany, India, Ireland, Israel, Italy, Japan, Kenya, Lithuania, Mexico, the Netherlands, New Zealand, Nigeria, Poland, the Republic of Korea, Romania, Singapore, South Africa, Sweden, Switzerland, Ukraine, the United Arab Emirates and the United Kingdom. The European Union will also be represented.

The senior White House official said, “I think that list of countries highlights just how pernicious and transnational and global the ransomware threat has been.”

Aside from government action, the Biden administration has called on private businesses, which most often are blindsided by the ransomware attacks, to modernize their cyber defenses to meet the threat.

Facebook-backed Group Launches Misinformation Adjudication Panel in Australia

A tech body backed by the Australian units of Facebook, Google and Twitter said on Monday it has set up an industry panel to adjudicate complaints over misinformation, a day after the government threatened tougher laws over false and defamatory online posts. 

Prime Minister Scott Morrison last week labeled social media “a coward’s palace,” while the government said on Sunday it was looking at measures to make social media companies more responsible, including forcing legal liability onto the platforms for the content published on them.   

The issue of damaging online posts has emerged as a second battlefront between Big Tech and Australia, which last year passed a law to make platforms pay license fees for content, sparking a temporary Facebook blackout in February.   

The Digital Industry Group Inc. (DIGI), which represents the Australian units of Facebook Inc., Alphabet’s Google and Twitter Inc., said its new misinformation oversight subcommittee showed the industry was willing to self-regulate against damaging posts. 

The tech giants had already agreed to a code of conduct against misinformation, “and we wanted to further strengthen it with independent oversight from experts, and public accountability,” DIGI Managing Director Sunita Bose said in a statement.

A three-person “independent complaints sub-committee” would seek to resolve complaints about possible breaches of the code of conduct via a public website, DIGI said, but would not take complaints about individual posts.

The industry’s code of conduct includes items such as taking action against misinformation affecting public health, which would include the novel coronavirus.   

DIGI, which also represents Apple Inc. and TikTok, said it could issue a public statement if a company was found to have violated the code of conduct or revoke its signatory status with the group. 

Reset Australia, an advocate group focused on the influence of technology on democracy, said the oversight panel was “laughable” as it involved no penalties and the code of conduct was optional. 

“DIGI’s code is not much more than a PR stunt given the negative PR surrounding Facebook in recent weeks,” Reset Australia’s director of tech policy, Dhakshayini Sooriyakumaran, said in a statement, urging regulation of the industry.

Facebook Unveils New Controls for Kids Using Its Platforms

Facebook, in the aftermath of damning testimony that its platforms harm children, will be introducing several features, including prompting teens to take a break from its photo-sharing app Instagram and “nudging” teens if they repeatedly look at the same content that is not conducive to their well-being.

The Menlo Park, California-based Facebook is also planning to introduce new controls on an optional basis so that parents or guardians can supervise what their teens are doing online. These initiatives come after Facebook announced late last month that it was pausing work on its Instagram for Kids project. But critics say the plan lacks details, and they are skeptical that the new features would be effective.  

The new controls were outlined on Sunday by Nick Clegg, Facebook’s vice president for global affairs, who made the rounds on various Sunday news shows including CNN’s “State of the Union” and ABC’s “This Week with George Stephanopoulos” where he was grilled about Facebook’s use of algorithms as well as its role in spreading harmful misinformation ahead of the Jan. 6 Capitol riots. 

“We are constantly iterating in order to improve our products,” Clegg told Dana Bash on “State of the Union” Sunday. “We cannot, with a wave of the wand, make everyone’s life perfect. What we can do is improve our products, so that our products are as safe and as enjoyable to use.” 

Clegg said that Facebook has invested $13 billion over the past few years in keeping the platform safe and that the company has 40,000 people working on these issues. And while Clegg said Facebook has done its best to keep harmful content off its platforms, he said he was open to more regulation and oversight.

“We need greater transparency,” he told CNN’s Bash. He noted that the systems that Facebook has in place should be held to account, if necessary, by regulation so that “people can match what our systems say they’re supposed to do from what actually happens.” 

The flurry of interviews came after whistleblower Frances Haugen, a former data scientist with Facebook, went before Congress last week to accuse the social media platform of failing to make changes to Instagram after internal research showed apparent harm to some teens and of being dishonest in its public fight against hate and misinformation. Haugen’s accusations were supported by tens of thousands of pages of internal research documents she secretly copied before leaving her job in the company’s civic integrity unit. 

Josh Golin, executive director of Fairplay, a children’s digital advocacy group, said that he doesn’t think introducing controls to help parents supervise teens would be effective since many teens set up secret accounts. 

He was also dubious about how effective nudging teens to take a break or move away from harmful content would be. He said Facebook needs to show exactly how it would implement these features and offer research showing that the tools are effective.

“There is tremendous reason to be skeptical,” he said. He added that regulators need to restrict what Facebook does with its algorithms.  

He said he also believes that Facebook should cancel its Instagram project for kids. 

When Clegg was grilled by both Bash and Stephanopoulos in separate interviews about the use of algorithms in amplifying misinformation ahead of the Jan. 6 riots, he responded that if Facebook removed the algorithms, people would see more, not less, hate speech and more, not less, misinformation.

Clegg told both hosts that the algorithms serve as “giant spam filters.” 

Democratic Sen. Amy Klobuchar of Minnesota, who chairs the Senate Commerce Subcommittee on Competition Policy, Antitrust, and Consumer Rights, told Bash in a separate interview Sunday that it’s time to update children’s privacy laws and offer more transparency in the use of algorithms. 

“I appreciate that he is willing to talk about things, but I believe the time for conversation is done,” said Klobuchar, referring to Clegg’s plan. “The time for action is now.” 

Infrastructure Successes Have Transformed America; Can Biden’s Plan Do the Same?

Congress appears poised to pass a bipartisan, $1 trillion plan that would be the largest federal investment in infrastructure in more than a decade. History shows that investing in infrastructure can transform the United States, changing how Americans move, bolstering economic prosperity, and significantly improving the health and quality of life for many. 


“When the transcontinental railroad was completed in 1869, we changed the way we moved forever, opening up the entire country and from the way humans had moved previously for thousands of years by animal to machine,” Greg DiLoreto, past president of the American Society of Civil Engineers (ASCE), told VOA via email. “[And] I think we all would agree that construction of the interstate highway system changed America in ways that greatly contributed to our economic prosperity.” 

In 1956, President Dwight D. Eisenhower signed the Federal-Aid Highway Act, which authorized the building of 65,000 kilometers (41,000 miles) of interstate highways — the largest American public works program in history at the time. Another earlier transformation occurred in 1936, when Congress passed the Rural Electrification Act, extending electricity into rural areas for the first time.

And the wave of projects that created modern sewage and water systems in urban areas in the late 19th and early 20th centuries left a lasting mark, providing reliable, clean water in cities and extracting pollution from sewage.

“American cities in the late 19th, early 20th century were incredibly unhealthy places,” says Richard White, professor emeritus of American history at Stanford University in California. “High child death rates, repeated epidemics, and much of that was waterborne disease that came from both ineffective sewage and impure water. And infrastructure projects changed that dramatically. Probably it’s been the most effective public health effort ever in the history of the United States.”

Dark consequences 

DiLoreto also cites the construction of dams across the western United States, which increased America’s ability to farm and feed the world, as an infrastructure success. But he points out that those projects created problems for migrating fish. In fact, many of the so-called successful infrastructure projects, like the interstate highways, had dark consequences.

“They increased racial stratification in the cities. They were built in such a way that they went through poorer neighborhoods, very often minority neighborhoods, walling them off from the city as a whole,” White says. “They set them apart and set in motion a set of social changes which we suffer from still. So, they hurt poorer areas, minority areas, even if they helped middle-class areas.” 

White, who wrote the book “Railroaded,” about the building of the transcontinental railroads, contends the federal government funded too many railroads into areas without the traffic to sustain them. 

“The railroads took government money and then went bankrupt,” White says. “They were very often utterly corrupt. The money was taken off into the private pockets behind some of the great fortunes in American history, and they never really delivered the economic and social benefits that they promised.” 

And Native Americans ended up paying the price, White adds. 

“Many of these railroads ended up costing Indian peoples huge amounts of land for no particular benefit,” he says. “It’s not like white settlement was particularly successful in the land the Indians lost. So, even though it was intended to raise the standard of living for everybody in the West, it didn’t necessarily do so, and the great cost was paid very often by Indian people.” 

Bold enough?

The stripped-down bipartisan version of President Joe Biden’s American Jobs Plan (AJP) pours money into transportation, utilities — including high-speed internet for rural communities — and pollution cleanup. What the bill does not appear to contain is a single transformative project. 

“From the information I have, funds will be used to help us repair, replace and make our infrastructure more robust to withstand climate change and seismic risks,” DiLoreto says. “One might consider that transformative in the sense that our quality of life and economic prosperity depend on a functioning infrastructure.” 

White views the bill as backward-looking rather than forward-thinking at a time when the United States needs to transform itself to adjust to a changing world, doing things differently in the future than it has in the past. 

“We have our first great infrastructure bill, which is mostly intended to protect things we built in the past, which, I think, in the long run, that’s going to be seen as a failing,” White says. “And again, I’m not saying that you should allow bridges to fall into rivers, or that the roads don’t need repair. But it’s not transformative.” 

There is one potentially sweeping project that could help revolutionize life in the United States. 

“Broadband has had a tremendous impact on our lives,” DiLoreto says. “Without a broadband system, our ability to economically survive COVID would have been difficult.” 

The current bipartisan plan provides $65 billion for broadband infrastructure. 

“If broadband in this bill works as they intend it … and they bring it into poor areas which now lack broadband, that would be a good thing, that could be transformative,” White says. “That could have the same kind of consequences that rural electrification had in terms of education and lightening people’s workload and allowing them to do the kinds of work they otherwise couldn’t do. … But if they simply make it more effective for those who have it already, it’s not going to be transformative.”

Chinese Cyber Operations Scoop Up Data for Political, Economic Aims 

Mustang Panda is a Chinese hacking group that is suspected of attempting to infiltrate the Indonesian government last month.

The reported breach, which the Indonesians denied, fits the pattern of China’s recent cyberespionage campaigns. These attacks have been increasing over the past year, experts say, in search of social, economic and political intelligence from Asian countries and other nations across the globe.

“There’s been an upswing,” said Ben Read, director of cyberespionage analysis at Mandiant, a cybersecurity firm, in an interview with VOA. Cyber operations stemming from China are “pretty extensive campaigns that haven’t seemed to be restrained at all,” he said.

‘Large-scale and indiscriminate’

For years, China was considered the United States' main cyber adversary, with coordinated teams both inside and outside the government conducting cyberespionage campaigns that were "large-scale and indiscriminate," Josephine Wolff, an associate professor of cybersecurity policy at Tufts University, told VOA.

The 2014-15 hack on the U.S. Office of Personnel Management, in which the personnel records of 22 million federal workers were compromised, was a case in point — a “big grab,” she said.

After a 2015 cybersecurity agreement between then-U.S. President Barack Obama and Chinese President Xi Jinping, attacks from China declined, at least against the West, experts say.

Hacking rising with rhetoric

But as tensions rose between Beijing and Washington during the Trump presidency, Chinese cyberespionage also increased. Over the past year, experts have attributed notable hacks in the U.S., Europe and Asia to China’s Ministry of State Security, the nation’s civilian intelligence agency, which has taken the lead in Beijing’s cyberespionage, consolidating efforts by the People’s Liberation Army.

TAG-28, a Chinese state-sponsored hacking team focused on the Indian subcontinent, reportedly infiltrated targets that included the Indian government agency in charge of a database of biometric and digital identity information for more than 1 billion people, according to The Record, a media site focused on cybersecurity.

A Microsoft report released in October accuses the Chinese hacking group Chromium of targeting universities in Hong Kong and Taiwan and going after other countries’ governments and telecommunication providers.

Hafnium, the name Microsoft gave to a Chinese hacking group, was behind the Microsoft Exchange hack earlier this year, according to the company and the Biden administration. Chinese hacking teams, Microsoft reported, took advantage of a weakness in the software to grab what they could before an emergency patch could be issued.

Scooping up data

A National Public Radio investigation asserted that the Microsoft Exchange hack may have been, in part, an information scoop aimed at acquiring large amounts of data to train China’s artificial intelligence assets.

Hafnium also targets higher education, defense industry firms, think tanks, law firms and nongovernmental organizations, the Microsoft report said. Another group from China, Nickel — also known as APT15 and Vixen Panda — targets governments in Central and South America and Europe, Microsoft said.

“What you are seeing now is this realization that Chinese espionage never disappeared and has become more technologically sophisticated,” Wolff said.

White House response

The Biden administration has stepped up its response to Chinese hacking. Over the summer, the U.S. and its allies, including the European Union, NATO and the United Kingdom, accused China of being behind the Microsoft hack and called on Beijing to cease the activity.

The Biden administration has not indicted anyone related to the Microsoft Exchange hack, nor has it instituted economic or other sanctions against China.

However, in July the U.S. unsealed an indictment against four members of China's Ministry of State Security over a separate attack conducted by a group that security researchers track as Advanced Persistent Threat (APT) 40, Bronze Mohawk and other names.

A Chinese government spokesman demanded that the U.S. drop the charges and denied the nation was behind the Microsoft Exchange hack.

“The United States ganged up with its allies to make unwarranted accusations against Chinese cybersecurity,” said Zhao Lijian, a Chinese Foreign Ministry spokesperson, in a July statement. “This was made up out of thin air and confused right and wrong. It is purely a smear and suppression with political motives.”

Pushing back

While China has stepped up its use of hacking, it has not crossed what some cyber experts consider a bright line in cyberespionage: public, overt attacks such as the Russian disinformation campaign to influence the 2016 U.S. presidential election and the Colonial Pipeline ransomware hack in May, which was attributed to Russia-based cybercriminals.

China’s aims appear to be long term and both economic and strategic, such as shoring up its capabilities “so they are not only well defended but surpass capacities,” Philip Reiner, the CEO of the Institute for Security and Technology, told VOA.

A collective push from world leaders that cyberespionage is unacceptable might resonate with Chinese leaders in Beijing, who want to be accepted on the world stage, he said. Detailing clear consequences for state-sponsored hacks is also critical, he said.

Without a strong push from the U.S. and its allies, experts say, China’s state-sponsored cyberattacks will continue.

Facebook Messenger, Instagram Service Disrupted for Second Time in a Week

Facebook confirmed on Friday that some users were having trouble accessing its apps and services, days after the social media giant suffered a six-hour outage triggered by an error during routine maintenance on its network of data centers. 

Some users were unable to load their Instagram feeds, while others were not able to send messages on Facebook Messenger. 

“We’re aware that some people are having trouble accessing our apps and products. We’re working to get things back to normal as quickly as possible and we apologize for any inconvenience,” Facebook said in a tweet.

People swiftly took to Twitter to share memes about the second Instagram disruption this week. 

Web monitoring group Downdetector showed there were more than 36,000 incidents of people reporting issues with photo-sharing platform Instagram on Friday. There were also more than 800 reported issues with Facebook’s messaging platform. 

Downdetector tracks outages only by collating status reports from a number of sources, including user-submitted errors on its platform, so the disruption may have affected a larger number of users. 
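
For illustration only, the collation approach described above can be sketched in a few lines of Python: count user-submitted problem reports per service over a short window and flag any service whose report volume spikes far above its usual baseline. The sample reports, thresholds and helper names below are hypothetical; Downdetector has not published its actual methodology.

    from collections import Counter
    from datetime import datetime, timedelta

    # Hypothetical user-submitted reports: (timestamp, service, issue description).
    REPORTS = [
        (datetime(2021, 10, 8, 18, 1), "instagram", "feed will not load"),
        (datetime(2021, 10, 8, 18, 2), "facebook messenger", "messages not sending"),
        (datetime(2021, 10, 8, 18, 3), "instagram", "feed will not load"),
    ]

    def count_recent_reports(reports, now, window=timedelta(minutes=15)):
        """Collate reports per service that arrived within the time window."""
        recent = Counter()
        for timestamp, service, _issue in reports:
            if now - window <= timestamp <= now:
                recent[service] += 1
        return recent

    def flag_outages(recent_counts, baseline_per_window, spike_factor=5):
        """Flag services whose recent report count far exceeds the normal rate."""
        return [service for service, count in recent_counts.items()
                if count >= spike_factor * baseline_per_window.get(service, 1.0)]

    now = datetime(2021, 10, 8, 18, 5)
    counts = count_recent_reports(REPORTS, now)
    print(flag_outages(counts, baseline_per_window={"instagram": 0.2, "facebook messenger": 0.1}))
    # prints: ['instagram', 'facebook messenger']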

The outage on Monday was the largest Downdetector had ever seen and blocked access to apps for billions of users of Facebook, Instagram and WhatsApp.

Americans Agree Misinformation Is a Problem, Poll Shows

Nearly all Americans agree that the rampant spread of misinformation is a problem.

Most also think social media companies, and the people who use them, bear a good deal of blame for the situation. But few are very concerned that they themselves might be responsible, according to a new poll from The Pearson Institute and the Associated Press-NORC Center for Public Affairs Research.

Ninety-five percent of Americans identified misinformation as a problem when they’re trying to access important information. About half put a great deal of blame on the U.S. government, and about three-quarters point to social media users and tech companies. Yet only 2 in 10 Americans say they’re very concerned that they have personally spread misinformation.

More — about 6 in 10 — are at least somewhat concerned that their friends or family members have been part of the problem.

For Carmen Speller, a 33-year-old graduate student in Lexington, Kentucky, the divisions are evident when she’s discussing the coronavirus pandemic with close family members. Speller trusts COVID-19 vaccines; her family does not. She believes the misinformation her family has seen on TV or read on questionable news sites has swayed them in their decision to stay unvaccinated against COVID-19.

In fact, some of her family members think she’s crazy for trusting the government for information about COVID-19.

“I do feel like they believe I’m misinformed. I’m the one that’s blindly following what the government is saying, that’s something I hear a lot,” Speller said. “It’s come to the point where it does create a lot of tension with my family and some of my friends as well.”

Speller isn’t the only one who may be having those disagreements with her family.

The survey found that 61% of Republicans say the U.S. government has a lot of responsibility for spreading misinformation, compared to just 38% of Democrats.

There’s more bipartisan agreement, however, about the role that social media companies, including Facebook, Twitter and YouTube, play in the spread of misinformation.

According to the poll, 79% of Republicans and 73% of Democrats said social media companies have a great deal or quite a bit of responsibility for misinformation.

And that type of rare partisan agreement among Americans could spell trouble for tech giants like Facebook, the largest and most profitable of the social media platforms, which is under fire from Republican and Democratic lawmakers alike.

“The AP-NORC poll is bad news for Facebook,” said Konstantin Sonin, a professor of public policy at the University of Chicago who is affiliated with the Pearson Institute. “It makes clear that assaulting Facebook is popular by a large margin — even when Congress is split 50-50, and each side has its own reasons.”

During a congressional hearing Tuesday, senators vowed to hit Facebook with new regulations after a whistleblower testified that the company’s own research shows its algorithms amplify misinformation and content that harms children.

“It has profited off spreading misinformation and disinformation and sowing hate,” Sen. Richard Blumenthal, D-Conn., said during a meeting of the Senate Commerce Subcommittee on Consumer Protection. Democrats and Republicans ended the hearing with acknowledgement that regulations must be introduced to change the way Facebook amplifies its content and targets users.

The poll also revealed that Americans are willing to blame just about everybody but themselves for spreading misinformation, with 53% of them saying they’re not concerned that they’ve spread misinformation.

“We see this a lot of times where people are very worried about misinformation but they think it’s something that happens to other people — other people get fooled by it, other people spread it,” said Lisa Fazio, a Vanderbilt University psychology professor who studies how false claims spread. “Most people don’t recognize their own role in it.”

Younger adults tend to be more concerned that they’ve shared falsehoods, with 25% of those ages 18 to 29 very or extremely worried that they have spread misinformation, compared to just 14% of adults ages 60 and older. Sixty-three percent of older adults are not concerned, compared with roughly half of other Americans.

Yet it’s older adults who should be more worried about spreading misinformation, given that research shows they’re more likely to share an article from a false news website, Fazio said.

Before she shares things with family or her friends on Facebook, Speller tries her best to make sure the information she’s passing on about important topics like COVID-19 has been peer-reviewed or comes from a credible medical institution. Still, Speller acknowledges there has to have been a time or two that she “liked” or hit “share” on a post that didn’t get all the facts quite right.

“I’m sure it has happened,” Speller said. “I tend to not share things on social media that I didn’t find on verified sites. I’m open to that if someone were to point out, ‘Hey this isn’t right,’ I would think, OK, let me check this.”

The AP-NORC poll of 1,071 adults was conducted Sept. 9-13 using a sample drawn from NORC’s probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 3.9 percentage points.
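
As a rough check on that margin: the standard 95% margin of error for a proportion from a simple random sample of 1,071 respondents is about plus or minus 3.0 percentage points, so the reported 3.9-point figure presumably also reflects a design effect from the panel's sampling and weighting. The arithmetic below is only that back-of-the-envelope calculation; the design effect of roughly 1.7 is an inference, not a figure AP-NORC publishes.

    import math

    def margin_of_error(n, p=0.5, z=1.96, design_effect=1.0):
        """95% margin of error for a proportion, at the worst case p = 0.5."""
        return z * math.sqrt(design_effect * p * (1 - p) / n)

    n = 1071
    # Simple random sample: about +/-3.0 percentage points.
    print(round(100 * margin_of_error(n), 1))
    # With an assumed design effect of about 1.7, the margin grows to
    # roughly the +/-3.9 points reported for the poll.
    print(round(100 * margin_of_error(n, design_effect=1.7), 1))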