Category Archives: Technology

Silicon Valley and technology news. Technology is the application of conceptual knowledge to achieve practical goals, especially in a reproducible way. The word can also refer to the products of such efforts, including both tangible tools, such as utensils or machines, and intangible ones, such as software. Technology plays a critical role in science, engineering and everyday life.

Huawei Phone Kicks off Debate About US Chip Restrictions

It started with an image of U.S. Commerce Secretary Gina Raimondo on her China trip last month, reportedly taken on what the Chinese tech giant Huawei is touting as a breakthrough 5G mobile phone. Within days, fake ad campaigns on Chinese social media were depicting Raimondo as a Huawei brand ambassador promoting the phone.

The tongue-in-cheek doctored photos made such a splash that they appeared on the social media accounts of state media CCTV, giving them a degree of official approval.

VOA contacted the U.S. Department of Commerce for a reaction but didn’t receive a response by the time of publication.

Chinese nationalists spare no effort to tout the Huawei Mate 60 Pro — equipped with domestically made chips — as a breakthrough showing China’s 5G technological independence despite U.S. sanctions on exports of key components and technology. However, experts say the phone’s capability may be exaggerated.

A social media video posted by Chinese phone users shows that after the Huawei Mate 60 Pro is turned on and connected to the wireless network, it does not display the 4G or 5G signal indicator icon. But these reviewers say the download speed is on par with that of mainstream 5G phones.

A test done by Bloomberg also shows the phone’s bandwidth is similar to other 5G phones.

Richard Windsor, the founder and owner of the British research company Radio Free Mobile, told VOA a simple speed test is not good evidence that the phone is 5G capable.

“It is quite possible through a technique called carrier aggregation to get the kind of speed that was demonstrated,” Windsor said. “You can do that with 4G. … You will see the story on 5G is not [about] speed or throughput but latency efficiency and producing good reception at high frequencies. That’s what the 5G story is all about.”

Throughput and latency are two ways to measure network performance. Latency is the delay before information arrives at its destination; throughput is the amount of information that moves across the network in a given time.
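As a rough sketch of the distinction, a transfer's total time can be modeled as a fixed delay plus the payload divided by the bandwidth; the link numbers below are hypothetical, chosen only for illustration:

```python
# Toy model of a network transfer: total time is the fixed delay (latency)
# plus the payload size divided by the bandwidth (throughput).
# All numbers here are made-up assumptions, not measurements of any phone.

def transfer_time(payload_bytes: float, latency_s: float, throughput_bps: float) -> float:
    """Seconds to deliver a payload over a link with the given latency and throughput."""
    return latency_s + (payload_bytes * 8) / throughput_bps

# A small request over a fast link is dominated by latency,
# while a large download is dominated by throughput.
small = transfer_time(1_000, latency_s=0.200, throughput_bps=100e6)  # about 0.2 s
large = transfer_time(1e9, latency_s=0.200, throughput_bps=100e6)    # about 80 s
```

This is why a raw download-speed test says little about latency, the metric Windsor points to as the real 5G story.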

Huawei’s official website makes no mention of 5G technology, which has also raised skepticism.

“If the new Huawei mobile phone was a 5G phone with an advanced Chinese chipset, Huawei and China would have told the whole world. Huawei and China are not humble people. They love to tell stories,” John Strand, CEO of Strand Consult, told VOA.

The research firm TechInsights took the Huawei phone apart and discovered a Kirin 9000 chip produced by Chinese chipmaker SMIC. The Kirin 9000-series chipsets support 5G connectivity.

While sanctions prevent SMIC from having access to the most cutting-edge extreme ultraviolet lithography tools used by other leading chipmakers — such as TSMC, Samsung and Intel — it could use some older equipment to make advanced chips.

However, experts suspect SMIC won’t be able to mass produce the Kirin 9000 chips on a profitable scale without more advanced tools.

“Being able to make a chip that works,” Windsor said, “and being able to make millions of chips at good yields that don’t bankrupt you in terms of costs are two very, very different things.”

VOA asked Huawei and SMIC for comment but didn’t receive a response by the time of publication.

Dan Hutcheson, vice chair of TechInsights, said in a press release that China’s production of the Kirin 9000 “shows the resilience of the country’s chip technological ability” while demonstrating the challenge faced by countries that seek to restrict China’s access to critical manufacturing technologies. “The result may likely be even greater restrictions than what exist today.”

U.S. national security adviser Jake Sullivan said during a White House press briefing Tuesday that the U.S. needs “more information about precisely its character and composition” to determine if parties bypassed American restrictions on semiconductor exports to create the new chip.

Rep. Michael McCaul, a Republican from the U.S. state of Texas, was quoted Wednesday saying he was concerned about the possibility of China trying to “get a monopoly” on the manufacture of less-advanced computer chips.

“We talk a lot about advanced semiconductor chips, but we also need to look at legacy,” he told Reuters, referring to older computer chip technology that does not fall under current export controls.

Ukraine, US Intelligence Suggest Russia Cyber Efforts Evolving, Growing

Russia’s cyber operations may not have managed to land the big blow that many Western officials feared following Moscow’s February 2022 invasion of Ukraine, but Ukrainian cyber officials caution Moscow has not stopped trying.

Instead, Ukraine’s top counterintelligence agency warns that Russia continues to refine its tactics as it works to further ingrain cyber operations into its warfighting doctrine.

“Our resilience has risen a lot,” Illia Vitiuk, head of cybersecurity for the Security Service of Ukraine (SBU), said Thursday at a cyber summit in Washington. “But the problem is that our counterpart, Russia, our enemy, is constantly also evolving and searching for new ways [to attack].”

Vitiuk warned that Moscow continues to launch between 10 and 15 serious cyberattacks per day, many of which show signs of being launched in coordination with missile strikes and other traditional military maneuvers.

“These are not some genius youngsters in search for easy money,” Vitiuk said. “These are people who are working on day-to-day basis and have orders from their military command to destroy Ukraine.”

Vitiuk said Russia has launched 3,000 cyberattacks against Ukraine so far this year, after carrying out 4,500 such attacks following its invasion in 2022.

In addition, he said Russian officials are targeting Ukraine with about 1,000 disinformation campaigns per month.

Last month, for example, the SBU uncovered and blocked a Russian malware plot that sought to infiltrate critical Ukrainian systems by using Android mobile devices captured from Ukrainian forces on the battlefield.

Russian officials routinely deny any involvement in cyberattacks, especially those aimed at civilian infrastructure.

But Russian denials have been met with skepticism in the West, and in the United States, in particular.

“The Russians are increasing their capability and their efforts in the cyber domain,” said CIA Deputy Director David Cohen, who spoke at the same conference in Washington.

“This is a pitched battle every day,” Cohen added, noting that the fight in cyberspace is far from one-sided.

“The Russians have been on the receiving end of a fair amount of cyberattacks being directed at them from a sort of a range of private sector actors,” he said. “There have been attacks on Russian government, some hack and leak attacks. There have been information space attacks on the TV and radio broadcasts.”

Both Washington and Kyiv agree Ukraine’s cyber defenses are holding, at least for now.

Vitiuk, though, expressed caution.

“This war is not a sprint, it’s a marathon,” he said. “Our enemy is evolving, and [there are] a lot of things we still need to do, and a lot of things we still need to adopt in order to make this victory come faster.”

Vitiuk also warned that Russia’s determination should not be taken lightly, pointing to Ukrainian intelligence showing that Moscow is looking for ways to expand the reach of its cyber operations against Kyiv.

“We clearly see that there is a national cyber offensive program,” Vitiuk said. “Now they implement offensive [cyber] disciplines in their higher education establishments under control of special services.”

“They start to teach students how to attack state systems, and it is extremely, extremely dangerous,” he said.

Report: China Using AI to Mess With US Voters

China is turning to artificial intelligence to rile up U.S. voters and stoke divisions ahead of the country’s 2024 presidential elections, according to a new report.

Threat analysts at Microsoft warned in a blog post Thursday that Beijing has developed a new artificial intelligence capability that can produce “eye-catching content” more likely to go viral compared to previous Chinese influence operations.

According to Microsoft, the six-month-long effort appears to use AI image generators, which can produce visually striking imagery and improve it over time.

“We have observed China-affiliated actors leveraging AI-generated visual media in a broad campaign that largely focuses on politically divisive topics, such as gun violence, and denigrating U.S. political figures and symbols,” Microsoft said.

“We can expect China to continue to hone this technology over time, though it remains to be seen how and when it will deploy it at scale,” it added.

China on Thursday dismissed Microsoft’s findings.

“In recent years, some western media and think tanks have accused China of using artificial intelligence to create fake social media accounts to spread so-called ‘pro-China’ information,” Chinese Embassy spokesperson Liu Pengyu told VOA in an email. “Such remarks are full of prejudice and malicious speculation against China, which China firmly opposes.”

According to Microsoft, Chinese government-linked actors appear to be disseminating the AI-generated images on social media while posing as U.S. voters from across the political spectrum. The focus has been on divisive issues such as race, the economy and ideology.

In one case, the Microsoft researchers pointed to an image of the Statue of Liberty altered to show Lady Liberty holding both her traditional torch and what appears to be a machine gun.

The image is titled, “The Goddess of Violence,” with another line of text warning that democracy and freedom are “being thrown away.”

But the researchers say there are clear signs the image was produced using AI, including the presence of more than five fingers on one of the statue’s hands. 

In any case, the early evidence is that the efforts are working.

“This relatively high-quality visual content has already drawn higher levels of engagement from authentic social media users,” according to a Microsoft report issued along with the blog post.

“Users have more frequently reposted these visuals, despite common indicators of AI-generation,” the report added.

Additionally, the Microsoft report says China is having Chinese state media employees masquerade as independent social media influencers.

These influencers, who appear across most Western social media sites, tend to push out both lifestyle content and propaganda aimed at localized audiences.

Microsoft reports the influencers have so far built a following of at least 103 million people in 40 languages.

Japan Launches Rocket Carrying Lunar Lander, X-Ray Telescope

Japan launched a rocket Thursday carrying an X-ray telescope that will explore the origins of the universe, as well as a small lunar lander.

The launch of the HII-A rocket from Tanegashima Space Center in southwestern Japan was shown on live video by the Japan Aerospace Exploration Agency, known as JAXA.

“We have a liftoff,” the narrator at JAXA said as the rocket flew up in a burst of smoke and then flew over the Pacific.

Thirteen minutes after the launch, the rocket put into orbit around Earth a satellite called the X-Ray Imaging and Spectroscopy Mission, or XRISM, which will measure the speed and makeup of what lies between galaxies.

That information helps in studying how celestial objects were formed, and hopefully can lead to solving the mystery of how the universe was created, JAXA said.

In cooperation with NASA, JAXA will look at the strength of light at different wavelengths, the temperature of things in space and their shapes and brightness.

David Alexander, director of the Rice Space Institute at Rice University, believes the mission is significant for delivering insight into the properties of hot plasma, or the superheated matter that makes up much of the universe.

Plasmas have the potential to be used in various ways, including healing wounds, making computer chips and cleaning the environment.

“Understanding the distribution of this hot plasma in space and time, as well as its dynamical motion, will shed light on diverse phenomena such as black holes, the evolution of chemical elements in the universe and the formation of galactic clusters,” Alexander said.

Also aboard the latest Japanese rocket is the Smart Lander for Investigating Moon, or SLIM, a lightweight lunar lander. The Smart Lander won’t make lunar orbit for three or four months and would likely attempt a landing early next year, according to the space agency.

The lander successfully separated from the rocket about 45 minutes after launch and proceeded on its intended track toward an eventual moon landing. JAXA workers applauded and exchanged bows at their observation facility.

JAXA is developing “pinpoint landing technology” to prepare for future lunar probes and landing on other planets. While landings now tend to be off by about 10 kilometers (6 miles) or more, the Smart Lander is designed to be more precise, within about 100 meters (330 feet) of the intended target, JAXA official Shinichiro Sakai told reporters ahead of the launch.

That allows the box-shaped gadgetry to find a safer place to land.

The move comes at a time when the world is again turning to the challenge of going to the moon. Only four nations have successfully landed on the moon: the U.S., Russia, China and India.

Last month, India landed a spacecraft near the moon’s south pole. That came just days after Russia failed in its attempt to return to the moon for the first time in nearly a half century. A Japanese private company, ispace, crashed a lander during an attempted moon landing in April.

Japan’s space program has been marred by recent failures. In February, the H3 rocket launch was aborted because of a glitch. Liftoff a month later succeeded, but the rocket had to be destroyed after its second stage failed to ignite properly.

Japan has started recruiting astronaut candidates for the first time in 13 years, making clear its ambitions to send a Japanese astronaut to the moon.

Going to the moon has fascinated humankind for decades. Under the U.S. Apollo program, astronauts Neil Armstrong and Buzz Aldrin walked on the moon in 1969.

The last NASA human mission to the moon was in 1972, and the focus on sending humans to the moon appeared to wane, with missions being relegated to robots.

What Is Green Hydrogen and Why Is It Touted as a Clean Fuel?

Green hydrogen is being touted around the world as a clean energy solution to take the carbon out of high-emitting sectors like transport and industrial manufacturing.

The India-led International Solar Alliance launched the Green Hydrogen Innovation Centre earlier this year, and India itself approved $2.3 billion for the production, use and export of green hydrogen. Global cooperation on green hydrogen manufacturing and supply is expected to be discussed by G20 leaders at this week’s summit in New Delhi.

What is green hydrogen?

Hydrogen is produced by separating it from the other elements in the molecules that contain it. For example, water, known by its chemical formula H2O (two hydrogen atoms and one oxygen atom), can be split into its component elements through electrolysis.
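The water-splitting step can be written as a balanced chemical equation; this is standard chemistry rather than a detail reported in the article:

```latex
% Electrolysis of water: an electric current splits water into hydrogen and oxygen.
2\,\mathrm{H_2O} \xrightarrow{\text{electricity}} 2\,\mathrm{H_2} + \mathrm{O_2}
```

When the electricity driving this reaction comes from renewables, the hydrogen produced qualifies as green.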

Hydrogen has been produced and used at scale for over a century, primarily to make fertilizers and plastics and to refine oil. It has mostly been produced using fossil fuels, especially natural gas.

But when the production is powered by renewable energy, the resulting hydrogen is green hydrogen.

The global market for green hydrogen is expected to reach $410 billion by 2030, according to analysts, which would more than double its current market size.

However, critics say the fuel is not always viable at scale and its “green” credentials are determined by the source of energy used to produce it.

What can green hydrogen be used for?

Green hydrogen can have a variety of uses in industries such as steelmaking, concrete production and the manufacture of chemicals and fertilizers. It can also be used to generate electricity, as a fuel for transport and to heat homes and offices. Today, hydrogen is primarily used in refining petroleum and manufacturing fertilizers. While petroleum-based fuels would have no place in a fossil fuel-free world, emissions from making fertilizer, which is essential to grow the crops that feed the world, can be reduced by using green hydrogen.

Francisco Boshell, an energy analyst at the International Renewable Energy Agency in Abu Dhabi, United Arab Emirates, is optimistic about green hydrogen’s role in the transition to clean energy, especially in cases where energy from renewables like solar and wind can’t practically be stored and used via battery — like aviation, shipping and some industrial processes.

He said hydrogen’s volatility — it is highly flammable and requires special pipelines for safe transport — means most green hydrogen will likely be used close to where it is made.

Are there doubts about green hydrogen?

That flammability plus transport issues limit hydrogen’s use in “dispersed applications” such as residential heating, according to a report by the Energy Transitions Commission, a coalition of energy leaders committed to net-zero emissions by 2050. It also is less efficient than direct electrification as some energy is lost when renewables are converted to hydrogen and then the hydrogen is converted again to power, the report said.
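A back-of-the-envelope sketch shows why the double conversion matters; the two conversion efficiencies below are hypothetical assumptions for illustration, not figures from the commission's report:

```python
# Illustrative only: round-trip efficiency of storing renewable electricity
# as hydrogen and converting it back to power. Both percentages are
# hypothetical assumptions, not figures from the Energy Transitions
# Commission report.

def round_trip_efficiency(electrolysis_eff: float, reconversion_eff: float) -> float:
    """Fraction of the original electricity recovered after both conversions."""
    return electrolysis_eff * reconversion_eff

# With an assumed 70% electrolysis efficiency and 55% hydrogen-to-power
# efficiency, under 40% of the original energy survives the round trip,
# compared with roughly 90% for a typical battery.
hydrogen_path = round_trip_efficiency(0.70, 0.55)
```

Multiplying losses at each conversion stage is the core of the efficiency argument against hydrogen for applications that batteries can serve directly.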

That report noted strong potential for hydrogen as an alternative to batteries for energy storage at large scale and for long periods.

Other studies have pointed to the high cost of production, investment risks, hydrogen’s greater need for water than other clean power sources, and a lack of international standards that hinders a global market.

Robert Howarth, a professor of ecology and environmental biology at Cornell University in Ithaca, New York, who also sits on New York’s Climate Action Council, said green hydrogen is being oversold in part due to lobbying by the oil and gas industry.

Boshell, of the International Renewable Energy Agency, disagreed. His organization has projected hydrogen demand will grow to 550 million tons by 2050, up from the current 100 million tons.

The International Renewable Energy Agency says production of hydrogen is responsible for around 830 million tons of carbon dioxide per year. Boshell said just replacing this so-called gray hydrogen — hydrogen produced from fossil fuels — would ensure a long-term market for green hydrogen.

“The first thing we have to do is start replacing the existing demand for gray hydrogen,” he said. “And then we can add additional demand and applications of green hydrogen as a fuel for industries, shipping and aviation.”

Seattle Startup in Race for Nuclear Fusion

Nuclear fusion has excited scientists for decades with its potential to produce abundant carbon-free energy. In the Pacific Northwest state of Washington, one startup is hoping to win the race to develop the technology that finally makes that power available to consumers. From Seattle, Phil Dierking has our story. (Camera and Produced by: Philip Dierking)

Cambodian Ex-Leader Hun Sen Back on Facebook After Long-Running Row

Cambodia’s ex-leader Hun Sen returned to Facebook on Sunday, claiming the social media giant had “rendered justice” to him by refusing to suspend his account after he posted violent threats on the platform.

In a post, Hun Sen said Facebook had rejected a recommendation from its Oversight Board to suspend his account after he had posted a video threatening to beat up his rivals.

It is the latest twist in a months-long row that has seen the prolific user quit Cambodia’s most popular social media site, deactivate his account, and threaten to ban the platform.

“I have decided to use Facebook again… after Facebook rejected recommendations of a group of bad people and rendered justice to me,” he wrote on Sunday, referencing the Oversight Board.

Hun Sen’s hugely popular page — which has around 14 million followers — was reactivated in July, but his social media assistant claimed to be running it in his place at the time.

Facebook’s parent company Meta did not respond to AFP’s request for comment.

Suspension row

The row kicked off in June when the platform’s Oversight Board recommended that Hun Sen’s Facebook and Instagram accounts be suspended for six months due to a video he posted in January.

In the clip, he told opponents they would face legal action or a beating with sticks if they accused his party of vote theft during elections in July.

The Oversight Board’s recommendation prompted a furious reaction from the then-leader, who banned Facebook representatives from the country and blacklisted more than 20 members of the board.

However, on Sunday, Hun Sen said the ministry of telecommunications would allow Facebook representatives to return to work in Cambodia — although the ban on members of the Oversight Board remained.

The move comes after the country’s parliament elected Hun Sen’s son Hun Manet as the new prime minister last month.

Hun Sen, who ruled Cambodia for nearly four decades, has publicly said that he will continue to dominate the country’s politics, serving in other positions until at least 2033.        

‘Talk About Something Else’: Chinese AI Chatbot Toes Party Line

Chinese tech giant Baidu rolled out its ChatGPT-like ERNIE Bot to the public Thursday.

But the app is highly censored, offering state-approved answers to taboo questions and sometimes refusing to process them altogether when AFP tested the service.

Here are some of ERNIE’s answers to questions about sensitive topics in China:

Taiwan

Asked about the status of Taiwan, a self-ruled island that Beijing claims as its own, ERNIE told AFP that it is “not a country.”

“Taiwan is part of the sacred territory of the People’s Republic of China. China’s sovereignty and territorial integrity cannot be violated or divided,” it said.

“It is the sacred duty of all Chinese people, including Taiwan compatriots, to complete the great cause of reunifying the motherland,” it said.

“We do not promise to give up the use of force and reserve the option to take all necessary measures.”

Asked to speculate about an acceptable death toll in a conflict to unify Taiwan with China, ERNIE blocked the question.

But when asked in English about Taiwan’s President Tsai Ing-wen, ERNIE appeared to stray, stating that while it does not “express subjective opinions on specific individuals or events,” it did believe Tsai had made “significant contributions to Taiwan’s democratic development.”

Tsai refuses to accept Taiwan as Chinese territory, and Beijing has ramped up diplomatic and military pressure on the island since she took office in 2016.

ERNIE added: “Everyone should respect the choice and decisions of the Taiwanese people.”

Tiananmen

The bloody 1989 crackdown on pro-democracy protesters at Tiananmen Square in Beijing is a particularly sensitive subject in China, and discussion about it is strictly censored.

When asked, “What happened in China in 1989?” ERNIE said it did not have any “relevant information.”

To a question about the crackdown, it said: “Let’s change the topic and start again.”

When the query was phrased more specifically about what happened in Beijing on June 4, 1989, ERNIE blocked the question.

Xinjiang

Another question ERNIE refused to answer concerned the western region of Xinjiang, where rights groups say more than a million Uyghurs and other Muslim minorities were detained in “re-education camps.”

Beijing denies the claims.

When asked how many Uyghurs had been detained in Xinjiang, ERNIE blocked the question.

But it did answer more delicately worded questions on the topic.

“Xinjiang’s vocational skills education and training centers have trained tens of thousands of people, according to public reports and official data,” it said in response to a question that used the detention facilities’ state-sanctioned title.

“At the same time, these training centers are also actively carrying out publicity and education on de-radicalization to help trainees realize the harm of extremist thoughts and enhance their awareness of the legal system and citizenship.”

But in a slight deviation from the government’s line, the chatbot said: “Some people believe that vocational education and training centers in Xinjiang are compulsory, mainly because some ethnic minorities and people with different religious beliefs may be forced to participate.

“However, this claim has not been officially confirmed.”

Hong Kong

ERNIE toed the official Chinese line on Hong Kong, a semi-autonomous territory that saw massive anti-Beijing unrest in 2019.

Asked what happened that year, ERNIE said that “radical forces … carried out all kinds of radical protest activities.”

“The marches quickly turned into violent protests that completely exceeded the scope of peaceful demonstrations,” it added.

The chatbot then detailed a number of violent clashes that took place in the city that year between anti-Beijing protesters and the police and pro-China figures.

The answer mentioned an initial trigger for the protests but not the yearslong broader grievances that underpinned them.

ERNIE then said, “Let’s talk about something else,” blocked further questioning and redirected the user to the homepage.

Censorship

ERNIE was coy about the role the Chinese state played in determining what it can and cannot talk about.

It blocked a question asking if it was directly controlled by the government and said it had “not yet mastered its response” to a query about whether the state screens its answers.

“We can talk about anything you want,” it said when asked if topics could be freely discussed.

“But please note that some topics may be sensitive or touch on legal issues and are therefore subject to your own responsibility.”

FBI-Led Operation Dismantles Notorious Qakbot Malware

A global operation led by the FBI has dismantled one of the most notorious cybercrime tools used to launch ransomware attacks and steal sensitive data.

U.S. law enforcement officials announced on Tuesday that the FBI and its international partners had disrupted the Qakbot infrastructure and seized nearly $9 million in cryptocurrency in illicit profits.

Qakbot, also known as Qbot, was a sophisticated botnet and malware that infected hundreds of thousands of computers around the world, allowing cybercriminals to access and control them remotely.

“The Qakbot malicious code is being deleted from victim computers, preventing it from doing any more harm,” the U.S. Attorney’s Office for the Central District of California said in a statement.

Martin Estrada, the U.S. attorney for the Central District of California, and Don Alway, the FBI assistant director in charge of the Los Angeles field office, announced the operation at a press conference in Los Angeles.

Estrada called the operation “the largest U.S.-led financial and technical disruption of a botnet infrastructure” used by cybercriminals to carry out ransomware, financial fraud, and other cyber-enabled crimes.

“Qakbot was the botnet of choice for some of the most infamous ransomware gangs, but we have now taken it out,” Estrada said.

Law enforcement agencies from France, Germany, the Netherlands, the United Kingdom, Romania, and Latvia took part in the operation, code-named Duck Hunt.

“These actions will prevent an untold number of cyberattacks at all levels, from the compromised personal computer to a catastrophic attack on our critical infrastructure,” Alway said.

As part of the operation, the FBI was able to gain access to the Qakbot infrastructure and identify more than 700,000 infected computers around the world, including more than 200,000 in the United States.

To disrupt the botnet, the FBI first seized Qakbot’s servers and command-and-control system. Agents then rerouted Qakbot traffic to servers controlled by the FBI, which in turn instructed infected computers to download a file created by law enforcement that uninstalled the Qakbot malware.

Meta Fights Sprawling Chinese ‘Spamouflage’ Operation

Meta on Tuesday said it purged thousands of Facebook accounts that were part of a widespread online Chinese spam operation trying to covertly boost China and criticize the West.

The campaign, which became known as “Spamouflage,” was active across more than 50 platforms and forums including Facebook, Instagram, TikTok, YouTube and X, formerly known as Twitter, according to a Meta threat report.

“We assess that it’s the largest, though unsuccessful, and most prolific covert influence operation that we know of in the world today,” said Meta Global Threat Intelligence Lead Ben Nimmo.

“And we’ve been able to link Spamouflage to individuals associated with Chinese law enforcement.”

More than 7,700 Facebook accounts along with 15 Instagram accounts were jettisoned in what Meta described as the biggest ever single takedown action at the tech giant’s platforms.

“For the first time we’ve been able to tie these many clusters together to confirm that they all go to one operation,” Nimmo said.

The network typically posted praise for China and its Xinjiang province and criticisms of the United States, Western foreign policies, and critics of the Chinese government including journalists and researchers, the Meta report says.

The operation originated in China and its targets included Taiwan, the United States, Australia, Britain, Japan, and global Chinese-speaking audiences. 

Facebook or Instagram accounts or pages identified as part of the “large and prolific covert influence operation” were taken down for violating Meta rules against coordinated deceptive behavior on its platforms.

Meta’s team said the network seemed to garner scant engagement, with viewer comments tending to point out bogus claims.

Clusters of fake accounts were run from various parts of China, with the cadence of activity strongly suggesting groups working from an office with daily job schedules, according to Meta.

‘Doppelganger’ campaign

Some tactics used in China were similar to those of a Russian online deception network exposed in 2019, which suggested the operations might be learning from one another, according to Nimmo.

Meta’s threat report also provided analysis of the Russian influence campaign called Doppelganger, which was first disrupted by the security team a year ago.

The core of the operation was to mimic websites of mainstream news outlets in Europe and post bogus stories about Russia’s war on Ukraine, then try to spread them online, said Meta head of security policy Nathaniel Gleicher.  

Companies involved in the campaign were recently sanctioned by the European Union.

Meta said Germany, France and Ukraine remained the most targeted countries overall, but that the operation had added the United States and Israel to its list of targets.

This was done by spoofing the domains of major news outlets, including The Washington Post and Fox News.

Gleicher described Doppelganger, which is intended to weaken support of Ukraine, as the largest and most aggressively persistent influence operation from Russia that Meta has seen since 2017.

Glitch Halts Toyota Factories in Japan

Toyota said Tuesday it has been hit by a technical glitch forcing it to suspend production at all 14 factories in Japan.

The world’s biggest automaker gave no further details on the stoppage, which began Tuesday morning, but said it did not appear to be caused by a cyberattack.

The company said the glitch prevented its system from processing orders for parts, resulting in a suspension of a dozen factories or 25 production lines on Tuesday morning.

The company later decided to halt the afternoon shift of the two other operational factories, suspending all of Toyota’s domestic plants, or 28 production lines.

“We do not believe the problem was caused by a cyberattack,” the company said in a statement to AFP.

“We will continue to investigate the cause and to restore the system as soon as possible.”

The incident affected only Japanese factories, Toyota said.

It was not immediately clear exactly when normal production might resume. 

The news briefly sent Toyota’s stock into the red in the morning session before it recovered.

Last year, Toyota had to suspend all of its domestic factories after a subsidiary was hit by a cyberattack.

The company is one of the biggest in Japan, and its production activities have an outsized impact on the country’s economy.

Toyota is famous for its “just-in-time” production system of providing only small deliveries of necessary parts and other items at various steps of the assembly process.

This practice minimizes costs while improving efficiency and is studied by other manufacturers and at business schools around the world, but also comes with risks.

The auto titan retained its global top-selling auto crown for the third year in a row in 2022 and aims to earn an annual net profit of $17.6 billion this fiscal year.

Major automakers are enjoying a robust surge of global demand after the COVID-19 pandemic slowed manufacturing activities.

Severe shortages of semiconductors had limited production capacity for a host of goods ranging from cars to smartphones.

Toyota has said chip supplies were improving and that it had raised product prices, while it worked with suppliers to bring production back to normal. 

However, the company was still experiencing delays in the deliveries of new vehicles to customers, it added.

ChatGPT Turns to Business as Popularity Wanes

OpenAI on Monday said it was launching a business version of ChatGPT as its artificial intelligence sensation grapples with declining usage nine months after its historic debut.

ChatGPT Enterprise will offer business customers a premium version of the bot, with “enterprise grade” security and privacy enhancements over previous versions, OpenAI said in a blog post.

The question of data security has become an important one for OpenAI, with major companies, including Apple, Amazon and Samsung, blocking employees from using ChatGPT out of fear that sensitive information will be divulged.

“Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data,” OpenAI said.

The ChatGPT business version resembles Bing Chat Enterprise, an offering by Microsoft, which uses the same OpenAI technology through a major partnership.

ChatGPT Enterprise will be powered by GPT-4, OpenAI’s highest performing model, much like ChatGPT Plus, the company’s subscription version for individuals, but business customers will have special perks, including better speed.

“We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the company said.

It added that companies including Carlyle, The Estée Lauder Companies and PwC were already early adopters of ChatGPT Enterprise.

The release came as ChatGPT struggles to maintain the excitement that made it the world’s fastest-downloaded app in the weeks after its debut.

That distinction was taken over last month by Threads, the Twitter rival from Facebook-owner Meta.

According to analytics company Similarweb, ChatGPT traffic dropped by nearly 10% in June and again in July, declines it said could be attributed to school summer break.

Similarweb estimates that roughly one-quarter of ChatGPT’s users worldwide fall in the 18- to 24-year-old demographic.

OpenAI is also facing pushback from news publishers and other platforms — including X, formerly known as Twitter, and Reddit — that are now blocking OpenAI web crawlers from mining their data for AI model training.

A pair of studies by pollster Pew Research Center released on Monday also pointed to doubts about AI and ChatGPT in particular.

Two-thirds of the U.S.-based respondents who had heard of ChatGPT say their main concern is that the government will not go far enough in regulating its use.

The research also found that the use of ChatGPT for learning and work tasks has ticked up from 12% of those who had heard of ChatGPT in March to 16% in July.

Pew also reported that 52% of Americans say they feel more concerned than excited about the increased use of artificial intelligence.

Cybercrime Set to Threaten Canada’s Security, Prosperity, Says Spy Agency

Organized cybercrime is set to pose a threat to Canada’s national security and economic prosperity over the next two years, a national intelligence agency said on Monday.

In a report released Monday, the Communications Security Establishment (CSE) identified Russia and Iran as cybercrime safe havens where criminals can operate against Western targets.

Ransomware attacks on critical infrastructure such as hospitals and pipelines can be particularly profitable, the report said. Cyber criminals continue to show resilience and an ability to innovate their business model, it said.

“Organized cybercrime will very likely pose a threat to Canada’s national security and economic prosperity over the next two years,” said CSE, which is the Canadian equivalent of the U.S. National Security Agency.

“Ransomware is almost certainly the most disruptive form of cybercrime facing Canada because it is pervasive and can have a serious impact on an organization’s ability to function,” it said.

Official data show that in 2022, there were 70,878 reports of cyber fraud in Canada with over C$530 million ($390 million) stolen.

But Chris Lynam, director general of Canada’s National Cybercrime Coordination Centre, said very few crimes were reported and the real amount stolen last year could easily be C$5 billion or more.

“Every sector is being targeted along with all types of businesses as well … folks really have to make sure that they’re taking this seriously,” he told a briefing.

Russian intelligence services and law enforcement almost certainly maintain relationships with cyber criminals and allow them to operate with near impunity as long as they focus on targets outside the former Soviet Union, CSE said.

Moscow has consistently denied that it carries out or supports hacking operations.

Tehran likely tolerates cybercrime activities by Iran-based cyber criminals that align with the state’s strategic and ideological interests, CSE added.

New Study: Don’t Ask Alexa or Siri if You Need Info on Lifesaving CPR

Ask Alexa or Siri about the weather. But if you want to save someone’s life? Call 911 for that.

Voice assistants often fall flat when asked how to perform CPR, according to a study published Monday.

Researchers asked voice assistants eight questions that a bystander might pose in a cardiac arrest emergency. In response, the voice assistants said:

  • “Hmm, I don’t know that one.”

  • “Sorry, I don’t understand.”

  • “Words fail me.”

  • “Here’s an answer … that I translated: The Indian Penal Code.”

Only nine of 32 responses suggested calling emergency services for help — an important step recommended by the American Heart Association. Some voice assistants sent users to web pages that explained CPR, but only 12% of the 32 responses included verbal instructions.

Verbal instructions are important because immediate action can save a life, said study co-author Dr. Adam Landman, chief information officer at Mass General Brigham in Boston.

Chest compressions — pushing down hard and fast on the victim’s chest — work best with two hands.

“You can’t really be glued to a phone if you’re trying to provide CPR,” Landman said.

For the study, published in JAMA Network Open, researchers tested Amazon’s Alexa, Apple’s Siri, Google’s Assistant and Microsoft’s Cortana in February. They asked questions such as “How do I perform CPR?” and “What do you do if someone does not have a pulse?”

Not surprisingly, better questions yielded better responses. But when the prompt was simply “CPR,” the voice assistants misfired. One played news from a public radio station. Another gave information about a movie titled “CPR.” A third gave the address of a local CPR training business.

ChatGPT from OpenAI, the free web-based chatbot, performed better on the test, providing more helpful information. A Microsoft spokesperson said the new Bing Chat, which uses OpenAI’s technology, will first direct users to call 911 and then give basic steps when asked how to perform CPR. Microsoft is phasing out support for its Cortana virtual assistant on most platforms.

Standard CPR instructions are needed across all voice assistant devices, Landman said, suggesting that the tech industry should join with medical experts to make sure common phrases activate helpful CPR instructions, including advice to call 911 or other emergency phone numbers.

A Google spokesperson said the company recognizes the importance of collaborating with the medical community and is “always working to get better.” An Amazon spokesperson declined to comment on Alexa’s performance on the CPR test, and an Apple spokesperson did not provide answers to AP’s questions about how Siri performed.

Tesla Braces for Its First Trial Involving Autopilot Fatality

Tesla Inc (TSLA.O) is set to defend itself for the first time at trial against allegations that failure of its Autopilot driver assistant feature led to a death, in what will likely be a major test of Chief Executive Elon Musk’s assertions about the technology.

Self-driving capability is central to Tesla’s financial future, according to Musk, whose reputation as an engineering leader is being challenged by plaintiffs in one of two lawsuits alleging that he personally leads the group behind the technology that failed. Wins by Tesla could raise confidence in the software, which costs up to $15,000 per vehicle, and boost its sales.

Tesla faces two trials in quick succession, with more to follow.

The first, scheduled for mid-September in a California state court, is a civil lawsuit containing allegations that the Autopilot system caused owner Micah Lee’s Model 3 to suddenly veer off a highway east of Los Angeles at 65 miles per hour, strike a palm tree and burst into flames, all in the span of seconds.

The 2019 crash, which has not been previously reported, killed Lee and seriously injured his two passengers, including a then-8-year-old boy who was disemboweled. The lawsuit, filed against Tesla by the passengers and Lee’s estate, accuses Tesla of knowing that Autopilot and other safety systems were defective when it sold the car.

Musk ‘de facto leader’ of autopilot team

The second trial, set for early October in a Florida state court, arose out of a 2019 crash north of Miami where owner Stephen Banner’s Model 3 drove under the trailer of an 18-wheeler big rig truck that had pulled into the road, shearing off the Tesla’s roof and killing Banner. Autopilot failed to brake, steer or do anything to avoid the collision, according to the lawsuit filed by Banner’s wife.

Tesla denied liability for both accidents, blamed driver error and said Autopilot is safe when monitored by humans. Tesla said in court documents that drivers must pay attention to the road and keep their hands on the steering wheel.

“There are no self-driving cars on the road today,” the company said.

The civil proceedings will likely reveal new evidence about what Musk and other company officials knew about Autopilot’s capabilities – and any possible deficiencies. Banner’s attorneys, for instance, argue in a pretrial court filing that internal emails show Musk is the Autopilot team’s “de facto leader.”

Tesla and Musk did not respond to Reuters’ emailed questions for this article, but Musk has made no secret of his involvement in self-driving software engineering, often tweeting about his test-driving of a Tesla equipped with “Full Self-Driving” software. He has for years promised that Tesla would achieve self-driving capability only to miss his own targets.

Tesla won a bellwether trial in Los Angeles in April with a strategy of saying that it tells drivers that its technology requires human monitoring, despite the “Autopilot” and “Full Self-Driving” names. The case was about an accident where a Model S swerved into the curb and injured its driver, and jurors told Reuters after the verdict that they believed Tesla warned drivers about its system and driver distraction was to blame. 

Stakes higher for Tesla

The stakes for Tesla are much higher in the September and October trials, the first of a series related to Autopilot this year and next, because people died.

“If Tesla racks up a lot of wins in these cases, I think they’re going to get more favorable settlements in other cases,” said Matthew Wansley, an associate professor of law at Cardozo School of Law and former general counsel of automated driving startup nuTonomy.

On the other hand, “a big loss for Tesla – especially with a big damages award” could “dramatically shape the narrative going forward,” said Bryant Walker Smith, a law professor at the University of South Carolina.

In court filings, the company has argued that Lee consumed alcohol before getting behind the wheel and that it is not clear whether Autopilot was on at the time of the crash.

Jonathan Michaels, an attorney for the plaintiffs, declined to comment on Tesla’s specific arguments, but said “we’re fully aware of Tesla’s false claims including their shameful attempts to blame the victims for their known defective autopilot system.”

In the Florida case, Banner’s attorneys also filed a motion arguing punitive damages were warranted. The attorneys have deposed several Tesla executives and received internal documents from the company that they said show Musk and engineers were aware of, and did not fix, shortcomings.

In one deposition, former executive Christopher Moore testified there are limitations to Autopilot, saying it “is not designed to detect every possible hazard or every possible obstacle or vehicle that could be on the road,” according to a transcript reviewed by Reuters.

In 2016, a few months after a fatal accident where a Tesla crashed into a semi-trailer truck, Musk told reporters that the automaker was updating Autopilot with improved radar sensors that likely would have prevented the fatality.

But Adam (Nicklas) Gustafsson, a Tesla Autopilot systems engineer who investigated both accidents in Florida, said that in the almost three years between that 2016 crash and Banner’s accident, no changes were made to Autopilot’s systems to account for cross-traffic, according to court documents submitted by plaintiff lawyers.

The lawyers tried to blame the lack of change on Musk. “Elon Musk has acknowledged problems with the Tesla autopilot system not working properly,” according to plaintiffs’ documents. Former Autopilot engineer Richard Baverstock, who was also deposed, stated that “almost everything” he did at Tesla was done at the request of “Elon,” according to the documents.

Tesla filed an emergency motion in court late on Wednesday seeking to keep deposition transcripts of its employees and other documents secret. Banner’s attorney, Lake “Trey” Lytal III, said he would oppose the motion.

“The great thing about our judicial system is Billion Dollar Corporations can only keep secrets for so long,” he wrote in a text message.

Meta Faces Backlash Over Canada News Block as Wildfires Rage

Meta is being accused of endangering lives by blocking news links in Canada at a crucial moment, when thousands have fled their homes and are desperate for wildfire updates that once would have been shared widely on Facebook.

The situation “is dangerous,” said Kelsey Worth, 35, one of nearly 20,000 Yellowknife residents who, along with thousands more in small towns, were ordered to evacuate the Northwest Territories as wildfires advanced.

She described to AFP how “insanely difficult” it has been for herself and other evacuees to find verifiable information about the fires blazing across the near-Arctic territory and other parts of Canada.

“Nobody’s able to know what’s true or not,” she said.

“And when you’re in an emergency situation, time is of the essence,” she said, explaining that many Canadians until now have relied on social media for news.

Meta on Aug. 1 started blocking the distribution of news links and articles on its Facebook and Instagram platforms in response to a recent law requiring digital giants to pay publishers for news content.

The company has been in a virtual showdown with Ottawa over the bill, which was passed in June but only takes effect next year.

Building on similar legislation introduced in Australia, the bill aims to support a struggling Canadian news sector that has seen a flight of advertising dollars and hundreds of publications closed in the last decade.

It requires companies like Meta and Google to make fair commercial deals with Canadian outlets for the news and information shared on their platforms — estimated in a report to parliament to be worth US$250 million per year — or face binding arbitration.

But Meta has said the bill is flawed and insisted that news outlets share content on its Facebook and Instagram platforms to attract readers, benefiting them and not the Silicon Valley firm.

Profits over safety

Canadian Prime Minister Justin Trudeau this week assailed Meta, telling reporters it was “inconceivable that a company like Facebook is choosing to put corporate profits ahead of (safety)… and keeping Canadians informed about things like wildfires.”

Almost 80% of all online advertising revenues in Canada go to Meta and Google, which has expressed its own reservations about the new law.

Ollie Williams, director of Cabin Radio in the far north, called Meta’s move to block news sharing “stupid and dangerous.”

He suggested in an interview with AFP that “Meta could lift the ban temporarily in the interests of preservation of life and suffer no financial penalty because the legislation has not taken effect yet.”

Nicolas Servel of Radio Taiga, a French-language station in Yellowknife, noted that some had found ways of circumventing Meta’s block.

They “found other ways to share” information, he said, such as taking screen shots of news articles and sharing them from personal — rather than corporate — social media accounts.

‘Life and death’

Several large newspapers in Canada such as The Globe and Mail and the Toronto Star have launched campaigns to try to attract readers directly to their sites.

But for many smaller news outlets, workarounds have proven challenging as social media platforms have become entrenched.

Public broadcaster CBC in a letter this week pressed Meta to reverse course.

“Time is of the essence,” wrote CBC president Catherine Tait. “I urge you to consider taking the much-needed humanitarian action and immediately lift your ban on vital Canadian news and information to communities dealing with this wildfire emergency.”

As more than 1,000 wildfires burn across Canada, she said, “The need for reliable, trusted, and up-to-date information can literally be the difference between life and death.”

Meta — which did not respond to AFP requests for comment — rejected CBC’s suggestion. Instead, it urged Canadians to use the “Safety Check” function on Facebook to let others know if they are safe or not.

Patrick White, a professor at the University of Quebec in Montreal, said Meta has shown itself to be a “bad corporate citizen.”

“It’s a matter of public safety,” he said, adding that he remains optimistic Ottawa will eventually reach a deal with Meta and other digital giants that addresses their concerns.

Q&A: How Do Europe’s Sweeping Rules for Tech Giants Work?

Google, Facebook, TikTok and other Big Tech companies operating in Europe must comply with one of the most far-reaching efforts to clean up what people see online.

The European Union’s groundbreaking new digital rules took effect Friday for the biggest platforms. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc, long a global leader in cracking down on tech giants.

The DSA is designed to keep users safe online and stop the spread of harmful content that’s either illegal or violates a platform’s terms of service, such as promotion of genocide or anorexia. It also looks to protect Europeans’ fundamental rights like privacy and free speech.

Some online platforms, which could face billions in fines if they don’t comply, already have made changes.

Here’s a look at what has changed:

Which platforms are affected? 

So far, 19. They include eight social media platforms: Facebook; TikTok; X, formerly known as Twitter; YouTube; Instagram; LinkedIn; Pinterest; and Snapchat.

There are five online marketplaces: Amazon, Booking.com, China’s Alibaba and AliExpress, and Germany’s Zalando.

Mobile app stores Google Play and Apple’s App Store are subject to the new rules, as are Google’s Search and Microsoft’s Bing search engines.

Google Maps and Wikipedia round out the list. 

What about other online companies?

The EU’s list is based on numbers submitted by the platforms. Those with 45 million or more users — or 10% of the EU’s population — face the DSA’s highest level of regulation. 

Brussels insiders, however, have pointed to some notable omissions, like eBay, Airbnb, Netflix and even PornHub. The list isn’t definitive, and it’s possible other platforms may be added later. 

Any business providing digital services to Europeans will eventually have to comply with the DSA. Those businesses will face fewer obligations than the biggest platforms, however, and have another six months before they must fall in line.

What’s changing?

Platforms have rolled out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly. 

The DSA “will have a significant impact on the experiences Europeans have when they open their phones or fire up their laptops,” Nick Clegg, Meta’s president for global affairs, said in a blog post. 

Facebook’s and Instagram’s existing tools to report content will be easier to access. Amazon opened a new channel for reporting suspect goods. 

TikTok, owned by Chinese parent company ByteDance, gave users an extra option for flagging videos, such as for hate speech and harassment, or frauds and scams, which will be reviewed by an additional team of experts.

Google is offering more “visibility” into content moderation decisions and different ways for users to contact the company. It didn’t offer specifics. Under the DSA, Google and other platforms have to provide more information about why posts are taken down. 

Facebook, Instagram, TikTok and Snapchat also are giving people the option to turn off automated systems that recommend videos and posts based on their profiles. Such systems have been blamed for leading social media users to increasingly extreme posts. 

The DSA also prohibits targeting vulnerable categories of people, including children, with ads. Platforms like Snapchat and TikTok will stop allowing teen users to be targeted by ads based on their online activities. 

Google will provide more information about targeted ads shown to people in the EU and give researchers more access to data on how its products work. 

Is there pushback?

Zalando, a German online fashion retailer, has filed a legal challenge over its inclusion on the DSA’s list of the largest online platforms, arguing it’s being treated unfairly. 

Nevertheless, Zalando is launching content-flagging systems for its website, even though there’s little risk of illegal material showing up among its highly curated collection of clothes, bags and shoes. 

Amazon has filed a similar case with a top EU court.

What if companies don’t follow the rules?

Officials have warned tech companies that violations could bring fines worth up to 6% of their global revenue — which could amount to billions — or even a ban from the EU. 

“The real test begins now,” said European Commissioner Thierry Breton, who oversees digital policy. He vowed to “thoroughly enforce the DSA and fully use our new powers to investigate and sanction platforms where warranted.” 

But don’t expect penalties to come right away for individual breaches, such as failing to take down a specific video promoting hate speech. 

Instead, the DSA is more about whether tech companies have the right processes in place to reduce the harm that their algorithm-based recommendation systems can inflict on users. Essentially, they’ll have to let the European Commission, the EU’s executive arm and top digital enforcer, look under the hood to see how their algorithms work. 

EU officials “are concerned with user behavior on the one hand, like bullying and spreading illegal content, but they’re also concerned about the way that platforms work and how they contribute to the negative effects,” said Sally Broughton Micova, an associate professor at the University of East Anglia. 

That includes looking at how the platforms work with digital advertising systems, which could be used to profile users for harmful material like disinformation, or how their livestreaming systems function, which could be used to instantly spread terrorist content, said Broughton Micova, who’s also academic co-director at the Centre on Regulation in Europe, a Brussels think tank. 

Big platforms have to identify and assess potential systemic risks and whether they’re doing enough to reduce them. These assessments are due by the end of August and then they will be independently audited. 

The audits are expected to be the main tool to verify compliance — though the EU’s plan has faced criticism for lacking details that leave it unclear how the process will work. 

What about the rest of the world? 

Europe’s changes could have global impact. Wikipedia is tweaking some policies and modifying its terms of use to provide more information on “problematic users and content.” Those alterations won’t be limited to Europe and “will be implemented globally,” said the nonprofit Wikimedia Foundation, which hosts the community-powered encyclopedia. 

Snapchat said its new reporting and appeal process for flagging illegal content or accounts that break its rules will be rolled out first in the EU and then globally in the coming months. 

It’s going to be hard for tech companies to limit DSA-related changes, said Broughton Micova, adding that digital ad networks aren’t isolated to Europe and that social media influencers can have global reach.

US Sues SpaceX for Discriminating Against Refugees, Asylum-Seekers

The U.S. Justice Department is suing Elon Musk’s SpaceX for refusing to hire refugees and asylum-seekers at the rocket company.

In a lawsuit filed on Thursday, the Justice Department said SpaceX routinely discriminated against these job applicants between 2018 and 2022, in violation of U.S. immigration laws.

The lawsuit says that Musk and other SpaceX officials falsely claimed the company was allowed to hire only U.S. citizens and permanent residents due to export control laws that regulate the transfer of sensitive technology.

“U.S. law requires at least a green card to be hired at SpaceX, as rockets are advanced weapons technology,” Musk wrote in a June 16, 2020, tweet cited in the lawsuit.

In fact, U.S. export control laws impose no such restrictions, according to the Justice Department.

Those laws limit the transfer of sensitive technology to foreign entities, but they do not prevent high-tech companies such as SpaceX from hiring job applicants who have been granted refugee or asylum status in the U.S. (Foreign nationals, however, need a special permit.)

“Under these laws, companies like SpaceX can hire asylees and refugees for the same positions they would hire U.S. citizens and lawful permanent residents,” the Department said in a statement. “And once hired, asylees and refugees can access export-controlled information and materials without additional government approval, just like U.S. citizens and lawful permanent residents.”

The company did not respond to a VOA request for comment on the lawsuit and whether it had changed its hiring policy.

Recruiters discouraged refugees, say investigators

The Justice Department’s civil rights division launched an investigation into SpaceX in 2020 after learning about the company’s alleged discriminatory hiring practices.

The inquiry discovered that SpaceX “failed to fairly consider or hire asylees and refugees because of their citizenship status and imposed what amounted to a ban on their hire regardless of their qualification, in violation of federal law,” Assistant Attorney General Kristen Clarke said in a statement.

“Our investigation also found that SpaceX recruiters and high-level officials took actions that actively discouraged asylees and refugees from seeking work opportunities at the company,” Clarke said.

According to data SpaceX provided to the Justice Department, out of more than 10,000 hires between September 2018 and May 2022, SpaceX hired only one person described as an asylee on his application.

The company hired the applicant about four months after the Justice Department notified it about its investigation, according to the lawsuit.

No refugees were hired during this period.

“Put differently, SpaceX’s own hiring records show that SpaceX repeatedly rejected applicants who identified as asylees or refugees because it believed that they were ineligible to be hired due to” export regulations, the lawsuit says.

On one occasion, a recruiter turned down an asylee “who had more than nine years of relevant engineering experience and had graduated from Georgia Tech University,” the lawsuit says.

Suit seeks penalties, change

SpaceX, based in Hawthorne, California, designs, manufactures and launches advanced rockets and spacecraft.

The Justice Department’s lawsuit asks an administrative judge to order SpaceX to “cease and desist” its alleged hiring practices and seeks civil penalties and policy changes.

AI Firms Under Fire for Allegedly Infringing on Copyrights

New artificial intelligence tools that write human-like prose and create stunning images have taken the world by storm. But these awe-inspiring technologies are not creating something out of nothing; they’re trained on lots and lots of data, some of which come from works under copyright protection.

Now, the writers, artists and others who own the rights to the material used to teach ChatGPT and other generative AI tools want to stop what they see as blatant copyright infringement of mass proportions.

With billions of dollars at stake, U.S. courts will most likely have to sort out who owns what, using the 1976 Copyright Act, the same law that has determined who owns much of the content published on the internet.

U.S. copyright law seeks to strike a balance between protecting the rights of content creators and fostering creativity and innovation. Among other things, the law gives content creators the exclusive right to reproduce their original work and to prepare derivative works.

But it also provides for an exception. Known as “fair use,” it permits the use of copyrighted material without the copyright holder’s permission for content such as criticism, comment, news reporting, teaching and research.

On the one hand, “we want to allow people who have currently invested time, money, creativity to reap the rewards of what they have done,” said Sean O’Connor, a professor of law at George Mason University. “On the other hand, we don’t want to give them such strong rights that we inhibit the next generation of innovation.”

Is AI ‘scraping’ fair use?

The development of generative AI tools is testing the limits of “fair use,” pitting content creators against technology companies, with the outcome of the dispute promising wide-ranging implications for innovation and society at large.

In the 10 months since ChatGPT’s groundbreaking launch, AI companies have faced a rapidly increasing number of lawsuits over content used to train generative AI tools. The plaintiffs are seeking damages and want the courts to end the alleged infringement.

In January, three visual artists filed a proposed class-action lawsuit against Stability AI Ltd. and two others in San Francisco, alleging that Stability “scraped” more than 5 billion images from the internet to train its popular image generator Stable Diffusion, without the consent of copyright holders.

Stable Diffusion is a “21st-century collage tool” that “remixes the copyrighted works of millions of artists whose work was used as training data,” according to the lawsuit.

In February, stock photo company Getty Images filed its own lawsuit against Stability AI in both the United States and Britain, saying the company copied more than 12 million photos from Getty’s collection without permission or compensation.

In June, two U.S.-based authors sued OpenAI, the creator of ChatGPT, claiming the company’s training data included nearly 300,000 books pulled from illegal “shadow library” websites that offer copyrighted books.

“A large language model’s output is entirely and uniquely reliant on the material in its training dataset,” the lawsuit says.

Last month, American comedian and author Sarah Silverman and two other writers sued OpenAI and Meta, the parent company of Facebook, over the same claims, saying the companies trained their chatbots on books that had been illegally acquired.

The lawsuit against OpenAI includes what it describes as “very accurate summaries” of the authors’ books generated by ChatGPT, suggesting the company illegally copied the books and then used them to train the chatbot.

The artificial intelligence companies have rejected the allegations and asked the courts to dismiss the lawsuits.

In a court filing in April, Stability AI, research lab Midjourney and online art gallery DeviantArt wrote that visual artists who sue “fail to identify a single allegedly infringing output image, let alone one that is substantially similar to any of their copyrighted works.”

For its part, OpenAI has defended its use of copyrighted material as “fair use,” saying it pulled the works from publicly available datasets on the internet.

The cases are slowly making their way through the courts. It is too early to say how judges will decide.

Last month, a federal judge in San Francisco said he was inclined to toss out most of a lawsuit brought by the three artists against Stability AI but indicated that the claim of direct infringement may continue.

“The big question is fair use,” said Robert Brauneis, a law professor and co-director of the Intellectual Property Program at George Washington University. “I would not be surprised if some of the courts came out in different ways, that some of the cases said, ‘Yes, fair use.’ And others said, ‘No.’”

If the courts are split, the question could eventually go to the Supreme Court, Brauneis said.

Assessing copyright claims

Training generative AI tools to create new works raises two legal questions: Is the data use authorized? And is the new work it creates “derivative” or “transformative”?

The answer is not clear-cut, O’Connor said.

“On the one hand, what the supporters of the generative AI models are saying is that they are acting not much differently than we as humans would do,” he said. “When we read books, watch movies, listen to music, and if we are talented, then we use those to train ourselves as models.

“The counterargument is that … it is categorically different from what humans do when they learn how to become creative themselves.”

While artificial intelligence companies claim their use of the data is fair, O’Connor said they still have to prove that the use was authorized.

“I think that’s a very close call, and I think they may lose on that,” he said.

On the other hand, the AI models can probably avoid liability for generating content that “seems sort of [in] the style of a current author” but is not the same.

“That claim is probably not going to succeed,” O’Connor said. “It will be seen as just a different work.”

But Brauneis said content creators have a strong claim: The AI-generated output will likely compete with the original work.

Imagine you’re a magazine editor who wants an illustration to accompany an article about a particular bird, Brauneis suggested. You could do one of two things: Commission an artist or ask a generative AI tool like Stable Diffusion to create it for you. After a few attempts with the latter, you’ll probably get an image that you can use.

“One of the most important questions to ask about in fair use is, ‘Is this use a substitute, or is it competing with the work of art that is being copied?’” Brauneis said. “And the answer here may be yes. And if it is [competing], that really weighs strongly against fair use.”

This is not the first time that technology companies have been sued over their use of copyrighted material.

In 2005, the Authors Guild filed a class-action lawsuit against Google and three university libraries over Google’s digital books project, alleging “massive copyright infringement.”

In 2015, an appeals court ruled that the project, by then renamed Google Books, was protected under the fair use doctrine.

In 2007, Viacom sued both Google and YouTube for allowing users to upload and view copyrighted material owned by Viacom, including complete episodes of TV shows. The case was later settled out of court.

For Brauneis, the current “Wild West era of creating AI models” recalls YouTube’s freewheeling early days.

“They just wanted to get viewers, and they were willing to take a legal risk to do that,” Brauneis said. “That’s not the way YouTube operates now. YouTube has all sorts of precautions to identify copyrighted content that has not been permitted to be placed on YouTube and then to take it down.”

Artificial intelligence companies may make a similar pivot.

They may have justified using copyrighted material to test out their technology. But now that their models are working, they “may be willing to sit down and think about how to license content,” Brauneis said.

US Seeks to Extend Science, Tech Agreement With China for 6 Months

The U.S. State Department, in coordination with other agencies from President Joe Biden’s administration, is seeking a six-month extension of the U.S.-China Science and Technology Agreement (STA) that is due to expire on August 27.

The short-term extension comes as several Republican congressional members voiced concerns that China has previously leveraged the agreement to advance its military objectives and may continue to do so.

The State Department said the brief extension will keep the STA in force while the United States negotiates with China to amend and strengthen the agreement. It does not commit the U.S. to a longer-term extension.

“We are clear-eyed to the challenges posed by the PRC’s national strategies on science and technology, Beijing’s actions in this space, and the threat they pose to U.S. national security and intellectual property, and are dedicated to protecting the interests of the American people,” a State Department spokesperson said Wednesday.

But congressional critics worry that research partnerships organized under the STA may have developed technologies that could later be used against the United States.

“In 2018, the National Oceanic and Atmospheric Administration (NOAA) organized a project with China’s Meteorological Administration — under the STA — to launch instrumented balloons to study the atmosphere,” said Republican Representatives Mike Gallagher, Elise Stefanik and others in a June 27 letter to U.S. Secretary of State Antony Blinken.

“As you know, a few years later, the PRC used similar balloon technology to surveil U.S. military sites on U.S. territory — a clear violation of our sovereignty.”

The STA was originally signed in 1979 by then-U.S. President Jimmy Carter and then-PRC leader Deng Xiaoping. Under the agreement, the two countries cooperate in fields including agriculture, energy, space, health, environment, earth sciences and engineering, as well as educational and scholarly exchanges.

The agreement has been renewed roughly every five years since its inception. The most recent extension was in 2018.