
Drone maker DJI sues Pentagon over Chinese military listing

WASHINGTON — China-based DJI sued the U.S. Defense Department on Friday for adding the drone maker to a list of companies allegedly working with Beijing’s military, saying the designation is wrong and has caused the company significant financial harm.

DJI, the world’s largest drone manufacturer, which sells more than half of all commercial drones in the U.S., asked a U.S. district judge in Washington to order its removal from the Pentagon list designating it as a “Chinese military company,” saying it “is neither owned nor controlled by the Chinese military.”

Placement on the list serves as a warning to U.S. entities and companies about the national security risks of doing business with the designated firms.

DJI’s lawsuit says because of the Defense Department’s “unlawful and misguided decision” it has “lost business deals, been stigmatized as a national security threat, and been banned from contracting with multiple federal government agencies.”

The company added that “U.S. and international customers have terminated existing contracts with DJI and refuse to enter into new ones.”

The Defense Department did not immediately respond to a request for comment.

DJI said on Friday it filed the lawsuit after the Defense Department did not engage with the company over the designation for more than 16 months, saying it “had no alternative other than to seek relief in federal court.”

Amid strained ties between the world’s two biggest economies, the updated list is one of numerous actions Washington has taken in recent years to highlight and restrict Chinese companies that it says may strengthen Beijing’s military.

Many major Chinese firms are on the list, including aviation company AVIC, memory chip maker YMTC, telecom operator China Mobile and energy company CNOOC.

In May, lidar manufacturer Hesai Group filed a suit challenging the Pentagon’s Chinese military designation for the company. On Wednesday, the Pentagon removed Hesai from the list but said it would immediately relist the China-based firm on national security grounds.

DJI is facing growing pressure in the United States.

Earlier this week, DJI told Reuters that U.S. Customs and Border Protection is stopping some DJI drones from entering the United States, citing the Uyghur Forced Labor Prevention Act.

DJI said no forced labor is involved at any stage of its manufacturing.

U.S. lawmakers have repeatedly raised concerns that DJI drones pose data transmission, surveillance and national security risks, something the company rejects.

Last month, the U.S. House voted to bar new DJI drones from operating in the U.S. The bill awaits Senate action. The Commerce Department said last month it is seeking comments on whether to impose restrictions on Chinese drones that would effectively ban them in the U.S., similar to proposed restrictions on Chinese vehicles.

Residents on Kenya’s coast use app to track migratory birds

The Tana River delta on the Kenyan coast includes a vast range of habitats and a remarkably productive ecosystem, says UNESCO. It is also home to many bird species, including some that are near threatened. Residents are helping local conservation efforts with an app called eBird. Juma Majanga reports.

Chinese cyber association calls for review of Intel products sold in China 

BEIJING — Intel products sold in China should be subject to a security review, the Cybersecurity Association of China (CSAC) said on Wednesday, alleging the U.S. chipmaker has “constantly harmed” the country’s national security and interests. 

While CSAC is an industry group rather than a government body, it has close ties to the Chinese state, and its raft of accusations against Intel, published in a long post on its official WeChat account, could trigger a security review by China’s powerful cyberspace regulator, the Cyberspace Administration of China (CAC).

“It is recommended that a network security review is initiated on the products Intel sells in China, so as to effectively safeguard China’s national security and the legitimate rights and interests of Chinese consumers,” CSAC said. 

Last year, the CAC barred domestic operators of key infrastructure from buying products made by U.S. memory chipmaker Micron Technology Inc after deeming the company’s products had failed its network security review. 

Intel did not immediately respond to a request for comment. The company’s shares were down 2.7% in U.S. premarket trading.

Tech firms increasingly look to nuclear power for data centers

As energy-hungry computer data centers and artificial intelligence programs place ever greater demands on the U.S. power grid, tech companies are looking to a technology that just a few years ago appeared ready to be phased out: nuclear energy. 

After several decades in which investment in new nuclear facilities in the U.S. had slowed to a crawl, tech giants Microsoft and Google have recently announced investments in the technology, aimed at securing a reliable source of emissions-free power for years into the future.  

Earlier this year, online retailer Amazon, which has an expansive cloud computing business, announced it had reached an agreement to purchase a nuclear energy-fueled data center in Pennsylvania and that it had plans to buy more in the future. 

However, the three companies’ strategies rely on somewhat different approaches to the problem of harnessing nuclear energy, and it remains unclear which, if any, will be successful. 

Energy demand 

Data centers, which concentrate thousands of powerful computers in one location, consume prodigious amounts of power, both to run the computers themselves and to operate the elaborate systems put in place to dissipate the large amount of heat they generate.  

A recent study by Goldman Sachs estimated that data centers currently consume between 1% and 2% of all available power generation. That percentage is expected to at least double by the end of the decade, even accounting for new power sources coming online. The study projected a 160% increase in data center power consumption by 2030. 

The U.S. Department of Energy has estimated that the largest data centers can consume more than 100 megawatts of electricity, or enough to power about 80,000 homes. 
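
As a rough arithmetic restatement of those figures, using only the numbers quoted above rather than any additional data from either study, the implied average household load and the projected growth work out as follows:

    # Back-of-the-envelope check of the figures cited above (illustrative only).
    largest_center_mw = 100      # DOE: the largest data centers can top 100 megawatts
    homes_powered = 80_000       # enough to power about 80,000 homes, per the estimate
    avg_home_kw = largest_center_mw * 1000 / homes_powered
    print(f"Implied average household load: {avg_home_kw:.2f} kW")     # about 1.25 kW

    consumption_growth = 1.60    # Goldman Sachs projection: a 160% increase by 2030
    print(f"2030 consumption relative to today: {1 + consumption_growth:.1f}x")  # about 2.6x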

Small, modular reactors 

Google’s plan is, in some ways, the most radical departure — both from the current structure of the energy grid and from traditional means of generating nuclear power. The internet search giant announced on Monday that it has partnered with Kairos Power to fund the construction of up to seven small-scale nuclear reactors that, across several locations, would combine to generate 500 megawatts of power. 

The small modular reactors (SMRs) are a new, and largely untested, technology. Unlike sprawling nuclear plants, SMRs are compact, requiring much less infrastructure to keep them operational and safe. 

“The smaller size and modular design can reduce construction timelines, allow deployment in more places, and make the final project delivery more predictable,” Google and Kairos said in a press release.  

The companies said they intend to have the first of the SMRs online by 2030, with the rest to follow by 2035. 

Great promise 

Sola Talabi, president of Pittsburgh Technical, a nuclear consulting firm, told VOA that SMR technology holds great promise for the future. He said that the plants’ small size will eliminate many of the safety concerns that larger reactors present. 

For example, some smaller reactors generate so much less heat than larger reactors that they can utilize “passive” cooling systems that are not susceptible to the kind of mechanical failures that led to the disasters at Japan’s Fukushima plant in 2011 and the Soviet Union’s Chernobyl plant in 1986.

Talabi, who is also an adjunct faculty member in nuclear engineering at the University of Pittsburgh and University of Michigan, said that SMRs’ modular nature will allow for rapid deployment and substantial cost savings as time goes on. 

“Pretty much every reactor that has been built [so far] has been built like it’s the first one,” he said. “But with these reactors, because we will be able to use the same processes, the same facilities, to produce them, we actually expect that we will be able to … achieve deployment scale relatively quickly.” 

Raising doubts 

Not all experts are convinced that SMRs are going to live up to expectations. 

Edwin Lyman, director of nuclear power safety for the Union of Concerned Scientists, told VOA that the Kairos reactors Google is hoping to install use a new technology that has never been tested under real-world conditions.

“At this point, it’s just hope without any real basis in experimental fact to believe that this is going to be a productive and reliable solution for the need to power data centers over the medium term,” he said. 

He pointed out that the large-scale deployment of new nuclear reactors will also result in the creation of a new source of nuclear waste, which the U.S. is still struggling to find a way to dispose of at scale.  

“I think what we’re seeing is really a bubble — a nuclear bubble — which I suspect is going to be deflated once these optimistic, hopeful agreements turn out to be much harder to execute,” Lyman said. 

Three Mile Island 

Microsoft and Amazon have plotted a more conventional path toward powering their data centers with nuclear energy. 

In its announcement last month, Microsoft revealed that it has reached an agreement with Constellation Energy to restart a mothballed nuclear reactor at Three Mile Island in Pennsylvania and to use the power it produces for its data operations. 

Three Mile Island is best known as the site of the worst nuclear disaster in U.S. history. In 1979, the site’s Unit 2 reactor suffered a malfunction that resulted in radioactive gases and iodine being released into the local environment.  

However, the facility’s Unit 1 reactor did not fail, and it operated safely for several decades. It was shut down in 2019, after cheap shale gas drove the price of energy down so far that it made further operations economically unfeasible. 

It is expected to cost $1.6 billion to bring the reactor back online, and Microsoft has agreed to fund that investment. It has also signed an agreement to purchase power from the facility for 20 years. The companies say they believe that they can bring the facility back online by 2028. 

Amazon’s plan, by contrast, does not require either new technology or the resurrection of an older nuclear facility. 

The data center that the company purchased from Talen Energy is located on the same site as the fully operational Susquehanna nuclear plant in Salem, Pennsylvania, and draws power directly from it. 

Amazon characterized the $650 million investment as part of a larger effort to reach net-zero carbon emissions by 2040. 

Report: Iran cyberattacks against Israel surge after Gaza war

Israel has become the top target of Iranian cyberattacks since the start of the Gaza war last year, while Tehran had focused primarily on the United States before the conflict, Microsoft said Tuesday.

“Following the outbreak of the Israel-Hamas war, Iran surged its cyber, influence, and cyber-enabled influence operations against Israel,” Microsoft said in an annual report.

“From October 7, 2023, to July 2024, nearly half of the Iranian operations Microsoft observed targeted Israeli companies,” said the Microsoft Digital Defense Report.

From July to October 2023, only 10 percent of Iranian cyberattacks targeted Israel, while 35 percent were aimed at American entities and 20 percent at the United Arab Emirates, according to the US software giant.

Since the war started, Iran has launched numerous social media operations aimed at destabilizing Israel.

“Within two days of Hamas’ attack on Israel, Iran stood up several new influence operations,” Microsoft said.

An account called “Tears of War” impersonated Israeli activists critical of Prime Minister Benjamin Netanyahu’s handling of a crisis over scores of hostages taken by Hamas, according to the report.

An account called “KarMa”, created by an Iranian intelligence unit, claimed to represent Israelis calling for Netanyahu’s resignation. 

Iran also began impersonating its partners after the war started, Microsoft said.

Iranian services created a Telegram account using the logo of the military wing of Hamas to spread false messages about the hostages in Gaza and threaten Israelis, Microsoft said. It was not clear if Iran acted with Hamas’s consent, it added.

“Iranian groups also expanded their cyber-enabled influence operations beyond Israel, with a focus on undermining international political, military, and economic support for Israel’s military operations,” the report said.

The Hamas terror attack on October 7, 2023, resulted in the deaths of 1,206 people, mostly civilians, according to an AFP tally of official Israeli figures, including hostages killed in captivity.  

Israel’s retaliatory military campaign in Gaza has killed 42,289 people, the majority civilians, according to the health ministry in the Hamas-run territory. The U.N. has described the figures as reliable. 

Africa’s farming future could include more digital solutions

NAIROBI, KENYA — More than 400 delegates and organizations working in Africa’s farming sector are in Nairobi, Kenya, this week to discuss how digital agriculture can improve the lives of farmers and the continent’s food system.

Tech innovators discussed the need for increased funding, especially for women.

In past decades, African farmers have struggled to produce enough food to feed the continent.

DigiCow is one of the tech companies at the conference that says it has answers to the problem. The Kenya-based company says it provides farmers with digital recordkeeping, education via audio on an app, and access to financing and marketing.

Maureen Saitoti, DigiCow’s brand manager, said the platform has improved the lives of at least half a million farmers.

“Other than access to finance, it is also able to offer access to the market because a farmer is able to predict the harvest they are anticipating and begin conversations with buyers who have also been on board on the platform,” she said. “So, this has proven to provide a wholesome integration of the ecosystem, supporting small-scale farmers.”

Integrating digital systems into food production helps farmers gain access to seed, fertilizer and loans, and helps prevent pests and diseases on farms, organizers said.

Innovation in agriculture technology is seen as helping reach marginalized groups, including women.

Sieka Gatabaki, program director for Mercy Corps AgriFin, which works in 40 countries with digital tool providers to increase the productivity and incomes of small-scale farmers, said his organization stresses education and practical information.

“We also focus on agronomic advice that gives the farmers the right kind of skills and knowledge to execute on their farms, as well as precision information such as weather that enables them to make the right decisions [about] how they grow and when they should grow and what they should grow in different geomatic climates,” Gatabaki said.

“Then we definitely expect that those farmers will increase their productivity and income.”

According to the State of AgTech Investment Report 2024, farming attracted $1.6 billion in funding in the past decade. But experts say the current funding is not enough to meet the sector’s growing demands.

David Saunder, director of strategy and growth at Briter Bridges, said funding systems have evolved to cope with problems faced by farmers and the food industry.

“Funding follows those businesses, those startups, that can viably grow and scale their businesses, and that’s what we are trying to do with AgTech to increase the data and information on those,” he said.

During the meeting, tech developers, experts and donors will also discuss how artificial intelligence and alternative data could be used to improve productivity.

US lawmakers seek answers from telecoms on Chinese hacking report

WASHINGTON — A bipartisan group of U.S. lawmakers asked AT&T, Verizon Communications, and Lumen Technologies on Friday to answer questions after a report that Chinese hackers accessed the networks of U.S. broadband providers. 

The Wall Street Journal reported Saturday that hackers obtained information from systems the federal government uses for court-authorized wiretapping, and said the three companies were among the telecoms whose networks were breached.

House Energy and Commerce Committee Chair Cathy McMorris Rodgers, a Republican, and the committee’s top Democrat, Representative Frank Pallone, along with Representatives Bob Latta and Doris Matsui, made the request. They are seeking a briefing and detailed answers by next Friday.

“There is a growing concern regarding the cybersecurity vulnerabilities embedded in U.S. telecommunications networks,” the lawmakers said. They are asking for details on what information was seized and when the companies learned about the intrusion. 

AT&T and Lumen declined to comment, while Verizon did not immediately comment. 

It was unclear when the hack occurred. 

Hackers might have held access for months to network infrastructure used by the companies to cooperate with court-authorized U.S. requests for communications data, the Journal said. It said the hackers had also accessed other tranches of internet traffic. 

China’s foreign ministry said on Sunday that it was not aware of the attack described in the report but said the United States had “concocted a false narrative” to “frame” China in the past. 

US states sue TikTok, saying it harms young users

NEW YORK/WASHINGTON — TikTok faces new lawsuits filed by 13 U.S. states and the District of Columbia on Tuesday, accusing the popular social media platform of harming and failing to protect young people.

The lawsuits, filed separately in New York, California, the District of Columbia and 11 other states, expand Chinese-owned TikTok’s legal fight with U.S. regulators and seek new financial penalties against the company.

Washington, the U.S. capital, is located in the District of Columbia, not in Washington state.

The states accuse TikTok of using intentionally addictive software designed to keep children watching as long and often as possible and misrepresenting its content moderation effectiveness.

“TikTok cultivates social media addiction to boost corporate profits,” California Attorney General Rob Bonta said in a statement. “TikTok intentionally targets children because they know kids do not yet have the defenses or capacity to create healthy boundaries around addictive content.”

TikTok seeks to maximize the amount of time users spend on the app in order to target them with ads, the states said.

“Young people are struggling with their mental health because of addictive social media platforms like TikTok,” said New York Attorney General Letitia James.

TikTok said on Tuesday that it strongly disagreed with the claims, “many of which we believe to be inaccurate and misleading,” and that it was disappointed the states chose to sue “rather than work with us on constructive solutions to industrywide challenges.”

TikTok provides safety features that include default screentime limits and privacy defaults for minors under 16, the company said.

Washington, D.C., Attorney General Brian Schwalb alleged that TikTok operates an unlicensed money transmission business through its livestreaming and virtual currency features.

“TikTok’s platform is dangerous by design. It’s an intentionally addictive product that is designed to get young people addicted to their screens,” Schwalb said in an interview.

Washington’s lawsuit accused TikTok of facilitating sexual exploitation of underage users, saying TikTok’s livestreaming and virtual currency “operate like a virtual strip club with no age restrictions.”

Illinois, Kentucky, Louisiana, Massachusetts, Mississippi, New Jersey, North Carolina, Oregon, South Carolina, Vermont and Washington state also sued on Tuesday.

In March 2022, eight states, including California and Massachusetts, said they had launched a nationwide probe of TikTok’s impact on young people.

The U.S. Justice Department sued TikTok in August for allegedly failing to protect children’s privacy on the app. Other states, including Utah and Texas, previously sued TikTok for failing to protect children from harm. TikTok on Monday rejected the allegations in a court filing.

TikTok’s Chinese parent company, ByteDance, is battling a U.S. law that could ban the app in the United States.

China-connected spamouflage networks spread antisemitic disinformation

WASHINGTON — Spamouflage networks with connections to China are posting antisemitic conspiracy theories on social media, casting doubt on Washington’s independence from alleged Jewish influence and the integrity of the two U.S. presidential candidates, a joint investigation by VOA Mandarin and Taiwan’s Doublethink Lab, a social media analytics firm, has found.

The investigation has so far uncovered more than 30 such X posts, many of which claim or suggest that core American political institutions, including the White House and Congress, have pledged loyalty to or are controlled by Jewish elites and the Israeli government.

One post shows a graphic of 18 U.S. officials of Jewish descent, including Secretary of State Antony Blinken, Treasury Secretary Janet Yellen, and the head of the Homeland Security Department, Alejandro Mayorkas, and asks: “Jews only make up 2% of the U.S. population, so why do they have so many representatives in important government departments?!”

Another post shows a cartoon depicting Vice President Kamala Harris, the Democratic candidate for president, and her opponent, Donald Trump, having their tongues tangled together and wrapped around an Israeli flagpole. The post proclaims that “no matter who of them comes to power, they will not change their stance on Judaism.”

Most of the 32 posts analyzed by VOA Mandarin and Doublethink Lab were posted during July and August. The posts came from three spamouflage accounts, two of which were previously reported by VOA.

Each of the three accounts leads its own spamouflage network. The three networks consist of 140 accounts, which amplify content from the three main accounts, or seeders.

A spamouflage network is a state-sponsored operation disguised as the work of authentic social media users to spread pro-government narratives and disinformation while discrediting criticism from adversaries.

Jasper Hewitt, a digital intelligence analyst at Doublethink Lab, told VOA Mandarin that the impact of these antisemitic posts has been limited, as most of them failed to reach real users, despite having garnered over 160,000 views.

U.S. officials have cast China as one of the major threats looking to disrupt this year’s election. Beijing, however, has repeatedly denied these allegations and urged Washington to “not make an issue of China in the election.”

Tuvia Gering, a nonresident fellow at the Atlantic Council’s Global China Hub, has closely followed antisemitic disinformation coming from China. He told VOA Mandarin that Beijing isn’t necessarily hostile toward Jews, but antisemitic conspiracy theories have historically been a handy tool to use against Western countries.

“You can trace its origins back to the Cold War, when the Soviet Union promoted antisemitic conspiracy theories all over the world just to instigate in Western societies,” Gering said, “because it divides them from within and it casts the West in a bad light in a strategic competition. [It’s] the same thing you see here [with China].”

Antisemitic speech floods Chinese internet

Similar antisemitic narratives about U.S. politics posted by the spamouflage accounts have long been flourishing on the Chinese internet.

An article that received thousands of likes and reposts on Chinese social media app WeChat claims that “Jewish capital” has completed its control of the American political sphere “through infiltration, marriages, campaign funds and lobbying.”

The article also brings up the Jewish heritage of many current and former U.S. officials and their families as evidence of the alleged Jewish takeover of America.

“The wife of the U.S. president is Jewish, the son-in-law of the former U.S. president is Jewish, the mother of the previous former U.S. president was Jewish, the U.S. Secretary of State is Jewish, the U.S. Secretary of Treasury is Jewish, the Deputy Secretary of State, the Attorney General … are all Jewish,” it wrote.

In fact, first lady Jill Biden is Roman Catholic, and the mother of former President Barack Obama was raised as a Christian. The others named are Jewish.

Conspiracy theories and misinformation abounded on the Chinese internet after the U.S. House of Representatives passed a bill in May that would empower the Department of Education to adopt a new set of standards when investigating antisemitism in educational programs.

Articles and videos assert that the bill marks the death of America because it “definitively solidifies the superior and unquestionable position of the Jews in America,” claiming falsely that anyone who’s labeled an antisemite will be arrested.

One video with more than 1 million views claimed that the New Testament of the Bible would be deemed illegal under the bill. And since all U.S. presidents took their inaugural oath with the Bible, the bill allegedly invalidates the legitimacy of the commander in chief. None of that is true.

The Chinese public hasn’t historically been hostile toward Jews. A 2014 survey published by the Anti-Defamation League, a U.S.-based group against antisemitism, found that only 20% of the participants from China harbored an antisemitic attitude.

But when the Israel-Hamas conflict broke out a year ago, the otherwise heavily censored Chinese social media was flooded with antisemitic comments and praise for Nazi Germany leader Adolf Hitler.

The Chinese government has dismissed criticism of antisemitism on its internet. When asked about it at a news conference last year, Wang Wenbin, then the spokesperson of the Foreign Ministry, said that “China’s laws unequivocally prohibit disseminating information on extremism, ethnic hatred, discrimination and violence via the internet.”

But online hate speech against Jews has hardly disappeared. Eric Liu, a former censor for Chinese social media platform Weibo who now monitors online censorship, told VOA Mandarin that whenever Israel is in the news, there is a surge in online antisemitism.

Just last month, after dozens of members of the Lebanon-based militant group Hezbollah were killed by explosions of their pagers, Chinese online commentators acidly condemned Israel and Jews.

The attack “proves that Jews are the most terrifying and cowardly people,” one Weibo user wrote. “They are self-centered and believe themselves to be superior, when in fact they are considered the most indecent and shameless. When the time comes, it’s going to be blood for blood.”

Australia’s online dating industry agrees to code of conduct to protect users

MELBOURNE, Australia — A code of conduct will be enforced on the online dating industry to better protect Australian users after research found that three in four people suffer some form of sexual violence through the platforms, Australia’s government said on Tuesday.

Bumble, Grindr and Match Group Inc., a Texas-based company that owns platforms including Tinder, Hinge, OKCupid and Plenty of Fish, have agreed to the code that took effect on Tuesday, Communications Minister Michelle Rowland said.

The platforms, which account for 75% of the industry in Australia, have until April 1 to implement the changes before they are strictly enforced, Rowland said.

The code requires the platforms’ systems to detect potential incidents of online-enabled harm and demands that the accounts of some offenders be terminated.

Complaint and reporting mechanisms are to be made prominent and transparent. A new rating system will show users how well platforms are meeting their obligations under the code.

The government called for a code of conduct last year after research by the Australian Institute of Criminology found that three in four users of dating apps or websites had experienced some form of sexual violence through these platforms in the five years through 2021.

“There needs to be a complaint-handling process. This is a pretty basic feature that Australians would have expected in the first place,” Rowland said on Tuesday.

“If there are grounds to ban a particular individual from utilizing one of those platforms, if they’re banned on one platform, they’re blocked on all platforms,” she added.

Match Group said it had already introduced new safety features on Tinder, including photo and identification verification to prevent bad actors from accessing the platform while giving users more confidence in the authenticity of their connections.

The platform uses artificial intelligence to issue real-time warnings about potentially offensive language in an opening line and to advise users to pause before sending.

“This is a pervasive issue, and we take our responsibility to help keep users safe on our platform very seriously,” Match Group said in a statement on Wednesday.

Match Group said it would continue to collaborate with the government and the industry to “help make dating safer for all Australians.”

Bumble said it shared the government’s hope of eliminating gender-based violence and was grateful for the opportunity to work with the government and industry on what the platform described as a “world-first dating code of practice.”

“We know that domestic and sexual violence is an enormous problem in Australia, and that women, members of LGBTQ+ communities, and First Nations people are the most at risk,” a Bumble statement said.

“Bumble puts women’s experiences at the center of our mission to create a world where all relationships are healthy and equitable, and safety has been central to our mission from day one,” Bumble added.

Grindr said in a statement it was “honored to participate in the development of the code and shares the Australian government’s commitment to online safety.”

All the platforms helped design the code.

Platforms that have not signed up include Happn, Coffee Meets Bagel and Feeld.

The government expects the code will enable Australians to make better informed choices about which dating apps are best equipped to provide a safe dating experience.

The government has also warned the online dating industry that it will legislate if the operators fail to keep Australians safe on their platforms.

Arkansas sues YouTube over claims it’s fueling mental health crisis

LITTLE ROCK, Arkansas — Arkansas sued YouTube and parent company Alphabet on Monday, saying the video-sharing platform is made deliberately addictive and is fueling a mental health crisis among youth in the state.

Attorney General Tim Griffin’s office filed the lawsuit in state court, accusing them of violating the state’s deceptive trade practices and public nuisance laws. The lawsuit claims the site is addictive and has resulted in the state spending millions on expanded mental health and other services for young people.

“YouTube amplifies harmful material, doses users with dopamine hits, and drives youth engagement and advertising revenue,” the lawsuit said. “As a result, youth mental health problems have advanced in lockstep with the growth of social media, and in particular, YouTube.”

Alphabet’s Google, which owns the video service and is also named as a defendant in the case, denied the lawsuit’s claims.

“Providing young people with a safer, healthier experience has always been core to our work. In collaboration with youth, mental health and parenting experts, we built services and policies to provide young people with age-appropriate experiences, and parents with robust controls,” Google spokesperson Jose Castaneda said in a statement. “The allegations in this complaint are simply not true.”

YouTube requires users under 17 to get their parent’s permission before using the site, while accounts for users younger than 13 must be linked to a parental account. But it is possible to watch YouTube without an account, and kids can easily lie about their age.

The lawsuit is the latest in an ongoing push by state and federal lawmakers to highlight the impact that social media sites have on younger users. U.S. Surgeon General Vivek Murthy in June called on Congress to require warning labels on social media platforms about their effects on young people’s lives, like those now mandatory on cigarette boxes.

Arkansas last year filed similar lawsuits against TikTok and Facebook parent company Meta, claiming the social media companies were misleading consumers about the safety of children on their platforms and protections of users’ private data. Those lawsuits are still pending in state court.

Arkansas also enacted a law requiring parental consent for minors to create new social media accounts, though that measure has been blocked by a federal judge.

Along with TikTok, YouTube is one of the most popular sites for children and teens. Both sites have been questioned in the past for hosting, and in some cases promoting, videos that encourage gun violence, eating disorders and self-harm.

YouTube in June changed its policies about firearm videos, prohibiting any videos demonstrating how to remove firearm safety devices. Under the new policies, videos showing homemade guns, automatic weapons and certain firearm accessories like silencers will be restricted to users 18 and older.

Arkansas’ lawsuit claims that YouTube’s algorithms steer youth to harmful adult content, and that it facilitates the spread of child sexual abuse material.

The lawsuit doesn’t seek specific damages, but asks that YouTube be ordered to fund prevention, education and treatment for “excessive and problematic use of social media.”

California governor vetoes bill to create first-in-nation AI safety measures

SACRAMENTO, California — California Governor Gavin Newsom on Sunday vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models.

The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said.

Earlier in September, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal “can have a chilling effect on the industry.”

The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections to workers.

The legislation is among a host of bills passed by the legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance.

Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some level of transparency and accountability around large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.

The bill targeted systems that require more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year.

“This is because of the massive investment scale-up within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company’s disregard for AI risks. “This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky.”

The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn’t as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said.

A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, said the measure’s supporters.

But critics, including former U.S. House Speaker Nancy Pelosi, argued that the bill would “kill California tech” and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said.

Newsom’s decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations.

Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline in August. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions.

The governor said earlier this summer he wanted to protect California’s status as a global leader in AI, noting that 32 of the world’s top 50 AI companies are located in the state.

He has promoted California as an early adopter as the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.

Earlier in September, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use.

But even with Newsom’s veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

“They are going to potentially either copy it or do something similar next legislative session,” Rice said. “So it’s not going away.”

Brazil imposes new fine, demands payments before letting X resume

SAO PAULO/BRASILIA, BRAZIL — Brazil’s Supreme Court said on Friday that social platform X still needs to pay just over $5 million in pending fines, including a new one, before it will be allowed to resume its service in the country, according to a court document.

Earlier this week, the Elon Musk-owned U.S. firm told the court it had complied with orders to stop the spread of misinformation and asked it to lift a ban on the platform. 

But Judge Alexandre de Moraes responded on Friday with a ruling that X and its legal representative in Brazil must still agree to pay a total of $3.4 million in pending fines that were previously ordered by the court. 

In his decision, the judge said that the court can use resources already frozen from X and Starlink accounts in Brazil, but to do so the satellite company, also owned by Musk, had to drop its pending appeal against the fund blockage.  

The judge also demanded a new $1.8 million fine related to a brief period last week when X became available again for some users in Brazil. 

X, formerly known as Twitter, did not immediately respond to a request for comment. 

According to a person close to X, the tech firm will likely pay all the fines but will consider challenging the fine that was imposed by the court after the platform ban.  

X has been suspended since late August in Brazil, one of its largest and most coveted markets, after Moraes ruled it had failed to comply with orders related to restricting hate speech and naming a local legal representative. 

Musk, who had denounced the orders as censorship and called Moraes a “dictator,” backed down and started to reverse his position last week, when X lawyers said the platform tapped a local representative and would comply with court rulings. 

In Friday’s decision, Moraes said that X had proved it had now blocked accounts as ordered by the court and had named the required legal representative in Brazil. 

CrowdStrike executive apologizes to Congress for July global tech outage

WASHINGTON — An executive at cybersecurity company CrowdStrike apologized in testimony to Congress for sparking a global technology outage over the summer. 

“We let our customers down,” said Adam Meyers, who leads CrowdStrike’s threat intelligence division, in a hearing before a U.S. House cybersecurity subcommittee Tuesday. 

Austin, Texas-based CrowdStrike has blamed a bug in an update that allowed its cybersecurity systems to push bad data out to millions of customer computers, setting off a global tech outage in July that grounded flights, took TV broadcasts off air and disrupted banks, hospitals and retailers. 

“Everywhere Americans turned, basic societal functions were unavailable,” House Homeland Security Committee Chairman Mark Green said. “We cannot allow a mistake of this magnitude to happen again.” 

The Tennessee Republican likened the impact of the outage to an attack “we would expect to be carefully executed by a malicious and sophisticated nation-state actor.” 

“We’re deeply sorry and we are determined to prevent this from ever happening again,” Meyers told lawmakers while laying out the technical missteps that led to the outage of about 8.5 million computers running Microsoft’s Windows operating system. 

Meyers said he wanted to “underscore that this was not a cyberattack” but was, instead, caused by a faulty “rapid-response content update” focused on addressing new threats. The company has since bolstered its content update procedures, he said. 

The company still faces a number of lawsuits from people and businesses that were caught up in July’s mass outage. 

Former executive gets 2 years in prison for role in FTX fraud

NEW YORK — Caroline Ellison, a former top executive in Sam Bankman-Fried’s fallen FTX cryptocurrency empire, was sentenced to two years in prison on Tuesday after she apologized repeatedly to everyone hurt by a fraud that stole billions of dollars from investors, lenders and customers.

U.S. District Judge Lewis A. Kaplan said Ellison’s cooperation was “very, very substantial” and “remarkable.” 

But he said a prison sentence was necessary because she had participated in what might be the “greatest financial fraud ever perpetrated in this country and probably anywhere else” or at least close to it. 

He said in such a serious case, he could not let cooperation be a get-out-of-jail-free card, even when it was clear that Bankman-Fried had become “your kryptonite.” 

“I’ve seen a lot of cooperators in 30 years here,” he said. “I’ve never seen one quite like Ms. Ellison.”

She was ordered to report to prison on November 7. 

Ellison, 29, pleaded guilty nearly two years ago and testified against Bankman-Fried for nearly three days at a trial last November. 

At sentencing, she emotionally apologized to anyone hurt by the fraud that stretched from 2017 through 2022. 

“I’m deeply ashamed with what I’ve done,” she said, fighting through tears to say she was “so so sorry” to everyone she had harmed directly or indirectly. 

She did not speak as she left Manhattan federal court, surrounded by lawyers. 

In a court filing, prosecutors had called her testimony the “cornerstone of the trial” against Bankman-Fried, 32, who was found guilty of fraud and sentenced to 25 years in prison. 

In court Tuesday, Assistant U.S. Attorney Danielle Sassoon called for leniency, saying her testimony was “devastating and powerful proof” against Bankman-Fried. 

The prosecutor said Ellison’s time on the witness stand was very different from Bankman-Fried, who she said was “evasive, even contemptuous, and unable to answer questions directly” when he testified. 

Attorney Anjan Sahni asked the judge to spare his client from prison, citing “unusual circumstances,” including her off-and-on romantic relationship with Bankman-Fried and the damage caused when her “whole professional and personal life came to revolve” around him. 

FTX was one of the world’s most popular cryptocurrency exchanges, known for its Super Bowl TV ad and its extensive lobbying campaign in Washington before it collapsed in 2022.

U.S. prosecutors accused Bankman-Fried and other executives of looting customer accounts on the exchange to make risky investments, make millions of dollars of illegal political donations, bribe Chinese officials, and buy luxury real estate in the Caribbean. 

Ellison was chief executive at Alameda Research, a cryptocurrency hedge fund controlled by Bankman-Fried that was used to process some customer funds from FTX. 

As the business began to falter, Ellison divulged the massive fraud to employees who worked for her even before FTX filed for bankruptcy, trial evidence showed. 

Ultimately, she also spoke extensively with criminal and civil U.S. investigators. 

Sassoon said prosecutors were impressed that Ellison did not “jump into the lifeboat” to escape her crimes but instead spent nearly two years fully cooperating. 

Since testifying at Bankman-Fried’s trial, Ellison has engaged in extensive charity work, written a novel, and worked with her parents on a math enrichment textbook for advanced high school students, according to her lawyers. 

They said she also now has a healthy romantic relationship and has reconnected with high school friends she had lost touch with while she worked for and sometimes dated Bankman-Fried from 2017 until late 2022. 

Biden administration seeks to ban Chinese, Russian tech in most US vehicles

NEW YORK — The U.S. Commerce Department said Monday it’s seeking a ban on the sale of connected and autonomous vehicles in the U.S. that are equipped with Chinese and Russian software and hardware, with the stated goal of protecting national security and U.S. drivers.

While there is minimal Chinese and Russian software deployed in the U.S., the issue is more complicated for hardware. That’s why Commerce officials said the prohibitions on the software would take effect for the 2027 model year and the prohibitions on hardware would take effect for the 2030 model year, or Jan. 1, 2029, for units without a model year.

The measure announced Monday is proactive but critical, the agency said, given that all the bells and whistles in cars, like microphones, cameras, GPS tracking and Bluetooth technology, could make Americans more vulnerable to bad actors and potentially expose personal information, from drivers’ home addresses to where their children go to school.

In extreme situations, a foreign adversary could shut down or take simultaneous control of multiple vehicles operating in the United States, causing crashes and blocking roads, U.S. Secretary of Commerce Gina Raimondo told reporters on a call Sunday.

“This is not about trade or economic advantage,” Raimondo said. “This is a strictly national security action. The good news is right now, we don’t have many Chinese or Russian cars on our road.”

But Raimondo said Europe and other regions in the world where Chinese vehicles have become commonplace very quickly should serve as “a cautionary tale” for the U.S.

Security concerns around the extensive software-driven functions in Chinese vehicles have arisen in Europe, where Chinese electric cars have rapidly gained market share.

“Who controls these data flows and software updates is a far from trivial question, the answers to which encroach on matters of national security, cybersecurity, and individual privacy,” Janka Oertel, director of the Asia program at the European Council on Foreign Relations, wrote on the council’s website.

Vehicles are now “mobility platforms” that monitor driver and passenger behavior and track their surroundings.

A senior administration official said that it is clear from terms of service contracts included with the technology that data from vehicles ends up in China.

Raimondo said that the U.S. won’t wait until its roads are populated with Chinese or Russian cars.

“We’re issuing a proposed rule to address these new national security threats before suppliers, automakers and car components linked to China or Russia become commonplace and widespread in the U.S. automotive sector,” Raimondo said.

It is difficult to know when China could reach that level of saturation, a senior administration official said, but the Commerce Department says China hopes to enter the U.S. market and several Chinese companies have already announced plans to enter the automotive software space.

The Commerce Department added Russia to the regulations since the country is trying to “breathe new life into its auto industry,” senior administration officials said on the call.

The proposed rule would prohibit the import and sale of vehicles with Russian- and Chinese-manufactured software and hardware that would allow the vehicle to communicate externally through Bluetooth, cellular, satellite or Wi-Fi modules. It would also prohibit the sale or import of software components made in Russia or the People’s Republic of China that collectively allow a highly autonomous vehicle to operate without a driver behind the wheel. The ban would include vehicles made in the U.S. using Chinese and Russian technology.

The proposed rule would apply to all vehicles, but would exclude those not used on public roads, such as agricultural or mining vehicles.

U.S. automakers said they share the government’s national security goal, but at present there is little connected vehicle hardware or software coming to the U.S. supply chain from China.

Yet the Alliance for Automotive Innovation, a large industry group, said the new rules will make some automakers scramble for new parts suppliers. “You can’t just flip a switch and change the world’s most complex supply chain overnight,” John Bozzella, the alliance’s CEO, said in a statement.

The lead time in the new rules will be long enough for some automakers to make the changes, “but may be too short for others,” Bozzella said.

Commerce officials met with all the major auto companies around the world while drafting the proposed rule to better understand supply chain networks, according to senior administration officials, and also met with a variety of industry associations.

The Commerce Department is inviting public comments, which are due 30 days after the proposed rule is published, before the rule is finalized. That is expected to happen by the end of the Biden administration.

The new rule follows steps taken earlier this month by the Biden administration to crack down on cheap products sold out of China, including electric vehicles, expanding a push to reduce U.S. dependence on Beijing and bolster homegrown industry.

US to propose ban on Chinese software, hardware in connected vehicles, sources say

WASHINGTON — The U.S. Commerce Department is expected on Monday to propose prohibiting Chinese software and hardware in connected and autonomous vehicles on American roads due to national security concerns, two sources told Reuters.

The Biden administration has raised serious concerns about the collection of data by Chinese companies on U.S. drivers and infrastructure as well as the potential foreign manipulation of vehicles connected to the internet and navigation systems.

The proposed regulation would ban the import and sale of vehicles from China with key communications or automated driving system software or hardware, said the two sources, who declined to be identified because the decision had not been publicly disclosed.

The move is a significant escalation in the United States’ ongoing restrictions on Chinese vehicles, software and components. Last week, the Biden administration locked in steep tariff hikes on Chinese imports, including a 100% duty on electric vehicles as well as new hikes on EV batteries and key minerals.

Commerce Secretary Gina Raimondo said in May the risks of Chinese software or hardware in connected U.S. vehicles were significant.

“You can imagine the most catastrophic outcome theoretically if you had a couple million cars on the road and the software were disabled,” she said.

President Joe Biden in February ordered an investigation into whether Chinese vehicle imports pose national security risks over connected-car technology — and if that software and hardware should be banned in all vehicles on U.S. roads.

“China’s policies could flood our market with its vehicles, posing risks to our national security,” Biden said earlier. “I’m not going to let that happen on my watch.”

The Commerce Department plans to give the public 30 days to comment before any finalization of the rules, the sources said. Nearly all newer vehicles on U.S. roads are considered “connected.” Such vehicles have onboard network hardware that allows internet access, allowing them to share data with devices both inside and outside the vehicle.

The department also plans to propose making the prohibitions on software effective in the 2027 model year and the ban on hardware would take effect in January 2029 or the 2030 model year. The prohibitions in question would include vehicles with certain Bluetooth, satellite and wireless features as well as highly autonomous vehicles that could operate without a driver behind the wheel.

A bipartisan group of U.S. lawmakers in November raised alarm about Chinese auto and tech companies collecting and handling sensitive data while testing autonomous vehicles in the United States.

The prohibitions would extend to other foreign U.S. adversaries, including Russia, the sources said.

A trade group representing major automakers including General Motors, Toyota Motor, Volkswagen, Hyundai and others had warned that changing hardware and software would take time.

The carmakers noted their systems “undergo extensive pre-production engineering, testing, and validation processes and, in general, cannot be easily swapped with systems or components from a different supplier.”

The Commerce Department declined to comment on Saturday. Reuters first reported, in early August, details of a plan that would have the effect of barring the testing of autonomous vehicles by Chinese automakers on U.S. roads. There are relatively few Chinese-made light-duty vehicles imported into the United States.

The White House on Thursday signed off on the final proposal, according to a government website. The rule is aimed at ensuring the security of the supply chain for U.S. connected vehicles. It will apply to all vehicles on U.S. roads, but not to agricultural or mining vehicles, the sources said.

Biden noted that most cars are connected like smartphones on wheels, linked to phones, navigation systems, critical infrastructure and to the companies that made them.

California governor signs law to protect children from social media addiction

SACRAMENTO, California — California will make it illegal for social media platforms to knowingly provide addictive feeds to children without parental consent beginning in 2027 under a new law Governor Gavin Newsom signed Friday. 

California follows New York state, which passed a law earlier this year allowing parents to block their kids from getting social media posts suggested by a platform’s algorithm. Utah has passed laws in recent years aimed at limiting children’s access to social media, but those have faced challenges in court. 

The California law will take effect in a state home to some of the largest technology companies in the world. Similar proposals have failed to pass in recent years, but Newsom signed a first-in-the-nation law in 2022 barring online platforms from using users’ personal information in ways that could harm children. 

It is part of a growing push in states across the country to try to address the impact of social media on the well-being of children. 

“Every parent knows the harm social media addiction can inflict on their children — isolation from human contact, stress and anxiety, and endless hours wasted late into the night,” Newsom, a Democrat, said in a statement. “With this bill, California is helping protect children and teenagers from purposely designed features that feed these destructive habits.” 

The law bans platforms from sending notifications to minors without parental permission between midnight and 6 a.m., and between 8 a.m. and 3 p.m. on weekdays from September through May, when children are typically in school. The legislation also requires platforms to set children’s accounts to private by default. 

Opponents of the legislation say it could inadvertently prevent adults from accessing content if they cannot verify their age. Some argue it would threaten online privacy by making platforms collect more information on users. 

The law defines an “addictive feed” as a website or app “in which multiple pieces of media generated or shared by users are, either concurrently or sequentially, recommended, selected, or prioritized for display to a user based, in whole or in part, on information provided by the user, or otherwise associated with the user or the user’s device,” with some exceptions. 

The subject garnered renewed attention in June when U.S. Surgeon General Vivek Murthy called on Congress to require warning labels on social media platforms about their impacts on young people. Attorneys general in 42 states endorsed the plan in a letter sent to Congress last week. 

State Senator Nancy Skinner, a Democrat representing Berkeley who wrote the California law, said that “social media companies have designed their platforms to addict users, especially our kids.” 

“With the passage of SB 976, the California Legislature has sent a clear message: When social media companies won’t act, it’s our responsibility to protect our kids,” she said in a statement.

China-connected spamouflage impersonated Dutch cartoonist

WASHINGTON — Based on the posts of an X account that bears the name of Dutch cartoonist Bart van Leeuwen, a profile picture of his face and a short professional bio, one would think the Amsterdam-based artist is a staunch supporter of China and a fierce critic of the United States.

In one post, the account blasts what it calls Washington’s “fallacies against the Chinese economy,” accompanied by a cartoon from the Global Times — a Beijing-controlled media outlet — showing Uncle Sam aiming but failing to hit a target emblazoned with the words “China’s economy.”

In another, the account reposts a Chinese propaganda video about the country’s rubber-stamp legislature, writing “today’s China is closely connected with the world, blending with each other, and achieving mutual success.”

But Van Leeuwen didn’t make the posts. In fact, this account doesn’t even belong to him.

It belongs to a China-connected network of “spamouflage” accounts on X, which pretend to be the work of real people but are in reality controlled by bots sending out messages designed to shape public opinion.

China has repeatedly rejected reports that it seeks to influence U.S. presidential elections, describing such claims as “fabricated.”

VOA Mandarin and DoubleThink Lab (DTL), a Taiwanese social media analytics firm, uncovered the fake Van Leeuwen account during a joint investigation into a network of spamouflage accounts working on behalf of the Chinese government.

The network, consisting of at least nine accounts, propagated Beijing’s talking points on issues including human rights abuses in China’s western Xinjiang region, territorial disputes with countries in the South China Sea and U.S. tariffs on Chinese goods.

Fake account contradicts real artist

Van Leeuwen confirmed in an interview with VOA Mandarin that he had nothing to do with and was not aware of the fake account.

“It’s ironic that my identity, being a political cartoonist, is being used for political propaganda,” he told VOA in a written statement.

The real Van Leeuwen is an award-winning cartoonist whose work has been published by news outlets around the world, including the Las Vegas Review-Journal, the Korea Times, Sing Tao Daily in Hong Kong and Gulf Today in the United Arab Emirates.

He specializes in editorial cartoons, whose main subjects include global politics, elections in the U.S. and Russia’s invasion of Ukraine. Several of his past illustrations made fun of Chinese leader Xi Jinping’s economic policies and the opaqueness of Beijing’s inner political struggles.

After being contacted by VOA Mandarin, a spokesman from X said the fake account has been suspended.

Beyond the irony of being impersonated by a Chinese propaganda bot, Van Leeuwen said the incident also worries him.

“This example once again highlights the need for far-reaching measures regarding the restriction of social media,” Van Leeuwen wrote in his statement, “especially with irresponsible people like Elon Musk at the helm.”

After purchasing what was then called Twitter in 2022, the Tesla and SpaceX CEO vowed to reduce the prevalence of bots on the platform, but many users complain the problem has grown even worse.

Musk, the world’s richest person, is a so-called “free speech absolutist,” opposing almost all censorship of people voicing their views. Critics say his policy allows racist and false information to flourish on X.

Former President Donald Trump has praised Musk’s business acumen and said he plans to have the man who may become the world’s first trillionaire head a commission on government efficiency if he is reelected in November.

Network of spamouflage accounts

Before its suspension, the X account that impersonated Van Leeuwen had close to 1,000 followers, more than Van Leeuwen’s real X account. It was registered in 2013, but its first post came only last year. The account’s early posts were mostly encouraging and inspiring words in Chinese. It also posted many dance videos.

Gradually, the account started to mix in more and more political narratives, criticizing the U.S. and defending China. It often reposted content from another spamouflage account called “Grey World.”

“Grey World” used a photo of an attractive Asian woman as its profile picture. Most of its posts were supportive of Beijing’s talking points. It regularly posted videos and cartoons from Chinese state media. It also posted several of Van Leeuwen’s cartoons about American politics.

VOA Mandarin and DTL’s investigation identified “Grey World” as the main spamouflage account in a network of nine such accounts. Other accounts in the network, including the fake Van Leeuwen account, amplified “Grey World” by reposting its content.

But posts from “Grey World” had limited reach on X, despite having tens of thousands of followers. For example, between August 18 and September 1, its most popular post, a diatribe against Washington’s Indo-Pacific strategy, was viewed a little over 10,000 times but only had 35 reposts and 65 likes.

After the suspension of the fake Van Leeuwen account, X also shut down the “Grey World” account.

The spamouflage network is not the first linked to China.

In April, British researchers released a report saying Chinese nationalist trolls were posing as American supporters of Trump on X to try to exploit domestic divisions ahead of the U.S. election.

U.S. federal prosecutors in 2023 accused China’s Ministry of Public Security of having a covert social media propaganda campaign that also aimed to influence U.S. elections.

Researchers at Facebook’s parent company, Meta, said the campaign was the largest known covert propaganda operation ever identified on that platform and on Instagram, Rolling Stone magazine reported.

Network analysis firm Graphika called the pro-Chinese network “Spamouflage Dragon,” part of a campaign it identified in early 2020 that was at the time posting content that praised Beijing’s policies and attacked those of then-President Trump.