
AI Anxiety: Workers Fret Over Uncertain Future

The tidal wave of artificial intelligence (AI) barrelling toward many professions has generated deep anxiety among workers fearful that their jobs will be swept away — and the mental health impact is rising.

The launch in November 2022 of ChatGPT, the generative AI platform capable of handling complex tasks on command, marked a tech landmark as AI started to transform the workplace.

“Anything new and unknown is anxiety-producing,” Clare Gustavsson, a New York therapist whose patients have shared concerns about AI, told AFP.

“The technology is growing so fast, it is hard to gain sure footing.”

Legal assistants, programmers, accountants and financial advisors are among those professions feeling threatened by generative AI that can quickly create human-like prose, computer code, articles or expert insight.

Goldman Sachs analysts see generative AI impacting, if not eliminating, some 300 million jobs, according to a study published in March.

“I anticipate that my job will become obsolete within the next 10 years,” Eric, a bank teller, told AFP, declining to give his second name.

“I plan to change careers. The bank I work for is expanding AI research.”

Trying to ‘embrace the unknown’

New York therapist Meris Powell told AFP of an entertainment professional worried about AI being used in film and television production — a threat to actors and screenwriters that is a flashpoint in strikes currently gripping Hollywood.

“It’s mainly people who are in creative fields who are at the forefront of that concern,” Gustavsson said.

AI is bringing with it a level of apprehension matching that caused by climate change and the Covid-19 pandemic, she contended.

But she said that she tries to get patients to “embrace the unknown” and find ways to use new technology to their advantage.

For one graphic animator in New York, the career-threatening shock came from seeing images generated by AI-infused software such as Midjourney and Stable Diffusion that rivaled the quality of those created by humans.

“People started to realize that some of the skills they had developed and specialized in could possibly be replaced by AI,” she told AFP, adding she had honed her coding skills, but now feels even that has scant promise in an AI world.

“I’ll probably lean into more of a management-level role,” she said. “It’s just hard because there are a lot less of those positions.

“Before I would just pursue things that interested me and skills that I enjoy. Now I feel more inclined to think about what’s actually going to be useful and marketable in the future.”

Peter Vukovic, who has been chief technology officer at several startups, expects just one percent or less of the population to benefit from AI.

“For the rest, it’s a gray area,” Vukovic, who lives in Bosnia, said. “There is a lot of reason for 99 percent of people to be concerned.”

AI is focused on efficiency and making money, but it could be channeled to serve other purposes, Vukovic said.

“What’s the best way for us to use this?” he asked. “Is it really just to automate a bunch of jobs?”

NASA Back in Touch With Voyager 2 After ‘Interstellar Shout’

NASA has succeeded in reestablishing full contact with Voyager 2 by using its highest-power transmitter to send an “interstellar shout” that righted the distant probe’s antenna orientation, the space agency said Friday.

Launched in 1977 to explore the outer planets and serve as a beacon of humanity to the wider universe, it is currently more than 19.9 billion kilometers from our planet — well beyond the solar system. 

A series of planned commands sent to the spaceship on July 21 mistakenly caused the antenna to point 2 degrees away from Earth, compromising its ability to send and receive signals and endangering its mission.

The situation was not expected to be resolved until at least Oct. 15 when Voyager 2 was scheduled to carry out an automated realignment maneuver.

But on Tuesday, engineers enlisted the giant radio antennas of the Deep Space Network to detect a carrier or “heartbeat” wave from Voyager 2, though the signal was still too faint to read the data it carried.

In an update on Friday, NASA’s Jet Propulsion Laboratory (JPL), which built and operates the probe, said it had succeeded in a longshot effort to send instructions that righted the craft.

“The Deep Space Network used the highest-power transmitter to send the command (the 100-kW S-band uplink from the Canberra site) and timed it to be sent during the best conditions during the antenna tracking pass in order to maximize possible receipt of the command by the spacecraft,” Voyager project manager Suzanne Dodd told AFP.

This so-called “interstellar shout” required 18.5 hours traveling at light speed to reach Voyager, and it took 37 hours for mission controllers to learn whether the command worked, JPL said in a statement.

The probe began returning science and telemetry data at 12:29 a.m. Eastern Time on Friday, “indicating it is operating normally and that it remains on its expected trajectory,” JPL added.

‘Golden records’

Voyager 2 left the protective magnetic bubble provided by the sun, called the heliosphere, in December 2018, and is currently traveling through the space between the stars.

Before leaving our solar system, it explored Jupiter and Saturn, and became the first and so far only spacecraft to visit Uranus and Neptune.

Voyager 2’s twin, Voyager 1, was mankind’s first spacecraft to enter the interstellar medium, in 2012, and is currently almost 24 billion kilometers from Earth.

Both carry “Golden Records” — 30-centimeter, gold-plated copper disks intended to convey the story of our world to extraterrestrials.

These include a map of our solar system, a piece of uranium that serves as a radioactive clock allowing recipients to date the spaceship’s launch, and symbols that convey how to play the record.

The contents of the discs, selected for NASA by a committee chaired by legendary astronomer Carl Sagan, include encoded images of life on Earth, as well as music and sounds that can be played using an included stylus.

For now, the Voyagers continue to transmit scientific data to Earth, though their radioisotope power supplies are expected to be depleted sometime after 2025.

They will then continue to wander the Milky Way, potentially for eternity, in silence. 

Australian Lawmakers Highlight Social Media’s Threat to National Security

A parliamentary committee investigating foreign interference in Australia has found that Chinese apps TikTok and WeChat could present major security risks.

In April, Australia said it would ban TikTok on government devices because of security fears. 

Lawmakers in Australia have sounded the alarm about the nefarious rise of social media and its power to spread disinformation and undermine trust. 

The Senate Select Committee on Foreign Interference through Social Media said that foreign interference was Australia’s most pressing national security threat. The parliamentary inquiry in Canberra found that the increased use of social media, including Chinese-owned apps TikTok and WeChat, could “corrupt our decision-making, political discourse and societal norms.”   

The report stated that “the Chinese government can require these social media companies to secretly cooperate with Chinese intelligence agencies.” 

Committee makes recommendations

The committee in Canberra has made 17 recommendations, including extending an April 2023 ban on TikTok on Australian government-issued devices to include WeChat, with the threat of fines and nationwide bans if the apps breach transparency guidelines.

Senator James Paterson is the head of the committee as well as Shadow Cyber Security Minister. He told the Australian Broadcasting Corp. Wednesday that the apps were guilty of spreading disinformation.  

“It is absolutely rife and it is occurring on all social media platforms,” said Paterson. “It is absolutely critical that any social media platform operating in Australia of any scale is able to be subject to Australian laws and regulation, and the oversight of our regulatory agencies and our parliament.”   

The Canberra government said it was considering all the committee’s recommendations. A government spokesperson asserted that foreign governments have used social media to harass diaspora communities and spread disinformation.

TikTok responds

In a statement, TikTok said that while it disagreed with the way it had been characterized by the parliamentary inquiry, it welcomed the committee’s decision not to recommend an outright ban.

It added that TikTok remained “committed to continuing an open and transparent dialogue with all levels of Australian government.” 

There has been no comment, so far, from WeChat.   

Meta, which owns Facebook, had previously told the inquiry that it had removed more than 200 foreign interference operations since 2017.  The U.S. company has warned that the internet’s democratic principles were increasingly being challenged by “strong forces.” 

Amazon Adds US-Wide Video Telemedicine Visits to Its Virtual Clinic

Amazon is adding video telemedicine visits in all 50 states to a virtual clinic it launched last fall, as the e-commerce giant pushes deeper into care delivery.

Amazon said Tuesday that customers can visit its virtual clinic around the clock through Amazon’s website or app. There, they can compare prices and response times before picking a telemedicine provider from several options.

The clinic, which doesn’t accept insurance, launched last fall with a focus on text message-based consultations. Those remain available in 34 states.

Virtual care, or telemedicine, exploded in popularity during the COVID-19 pandemic. It has remained popular as a convenient way to check in with a doctor or deal with relatively minor health issues like pink eye.

Amazon says its clinic offers care for more than 30 common health conditions. Those include sinus infections, acne, COVID-19 and acid reflux. The clinic also offers treatments for motion sickness, seasonal allergies and several sexual health conditions, including erectile dysfunction.

It also provides birth control and emergency contraception.

Chief Medical Officer Dr. Nworah Ayogu said in a blog post that the clinic aims to remove barriers to help people treat “everyday health concerns.”

“As a doctor, I’ve seen firsthand that patients want to be healthy but lack the time, tools, or resources to effectively manage their care,” Ayogu wrote.

Amazon said messaging-based consultations cost $35 on average while video visits cost $75.

That’s cheaper than the cost of many in-person visits with a doctor, which can run over $100 for people without insurance or coverage that makes them pay a high deductible.

While virtual visits can improve access to care, some doctors worry that they also lead to care fragmentation and can make it harder to track a patient’s overall health. That could happen if a patient has a regular doctor who doesn’t learn about the virtual visit from another provider.

In addition to virtual care, Amazon also sells prescription drugs through its Amazon Pharmacy business and has been building its presence in in-person care.

Earlier this year, Amazon also closed a $3.9 billion acquisition of the membership-based primary care provider One Medical, which had about 815,000 customers and 214 medical offices in more than 20 markets.

One Medical offers both in-person care and virtual visits.

Anti-monopoly groups had called on the Federal Trade Commission to block the deal, arguing it would endanger patient privacy and help make the retailer more dominant in the marketplace. The agency didn’t block the deal but said it won’t rule out future challenges.

That deal was the first acquisition made under Amazon CEO Andy Jassy, who took over from founder Jeff Bezos in 2021. Jassy sees health care as a growth opportunity for the company.

Meta to Ask EU Users’ Consent to Share Data for Targeted Ads

Social media giant Meta on Tuesday said it intends to ask European Union-based users to give their consent before allowing targeted advertising on its networks including Facebook, bowing to pressure from European regulators.

It said the changes were to address “evolving and emerging regulatory requirements” amid a bruising tussle with the Irish Data Protection Commission that oversees EU data rules in Ireland, out of which Meta runs its European operations.

European regulators in January had dismissed the previous legal basis — “legitimate interest” — Meta had used to justify gathering users’ personal data for targeted advertising.

Currently, users joining Facebook and Instagram by default have that permission turned on, feeding their data to Meta so it can generate billions of dollars from such ads.

“Today, we are announcing our intention to change the legal basis that we use to process certain data for behavioral advertising for people in the EU, EEA [European Economic Area] and Switzerland from ‘Legitimate Interests’ to ‘Consent’,” Meta said in a blog post.

Meta added it will share more information in the months ahead as it continues to “constructively engage” with regulators.

“There is no immediate impact to our services in the region. Once this change is in place, advertisers will still be able to run personalized advertising campaigns to reach potential customers and grow their businesses,” it said.

Meta and other U.S. Big Tech companies have been hit by massive fines over their business practices in the EU in recent years and have been impacted by the need to comply with the bloc’s strict data privacy regulations.

Further effects are expected from the EU’s landmark Digital Markets Act, which bans anti-competitive behavior by the so-called “gatekeepers” of the internet.

Flashing ‘X’ Sign Removed From Former Twitter’s Headquarters

A brightly flashing “X” sign has been removed from the San Francisco headquarters of the company formerly known as Twitter just days after it was installed. 

The San Francisco Department of Building Inspection said Monday it received 24 complaints about the unpermitted structure over the weekend. Complaints included concerns about its structural safety and illumination. 

The Elon Musk-owned company, which has been rebranded as X, had removed the Twitter sign and iconic blue bird logo from the building last week. That work was temporarily paused because the company did not have the necessary permits. For a time, the “er” at the end of “Twitter” remained up due to the abrupt halt of the sign takedown. 

The city of San Francisco had opened a complaint and launched an investigation into the giant “X” sign, which was installed Friday on top of the downtown building as Musk continues his rebrand of the social media platform. 

The chaotic rebrand of Twitter’s building signage is similar to the haphazard way in which the Twitter platform is being turned into X. While the X logo has replaced Twitter on many parts of the site and app, remnants of Twitter remain. 

Representatives for X did not immediately respond to a request for comment Monday. 

China Curbs Drone Exports, Citing Ukraine, Concern About Military Use

China imposed restrictions Monday on exports of long-range civilian drones, citing Russia’s war in Ukraine and concern that drones might be converted to military use. 

Chinese leader Xi Jinping’s government is friendly with Moscow but says it is neutral in the 18-month-old war. It has been stung by reports that both sides might be using Chinese-made drones for reconnaissance and possibly attacks. 

Export controls will take effect Tuesday to prevent use of drones for “non-peaceful purposes,” the Ministry of Commerce said in a statement. It said exports still will be allowed but didn’t say what restrictions it would apply. 

China is a leading developer and exporter of drones. DJI Technology Co., one of the global industry’s top competitors, announced in April 2022 it was pulling out of Russia and Ukraine to prevent its drones from being used in combat. 

“The risk of some high specification and high-performance civilian unmanned aerial vehicles being converted to military use is constantly increasing,” the Ministry of Commerce said. 

Restrictions will apply to drones that can fly beyond the operator’s natural line of sight or stay aloft for more than 30 minutes, that have attachments capable of throwing objects, and that weigh more than seven kilograms (15½ pounds), according to the ministry. 

“Since the crisis in Ukraine, some Chinese civilian drone companies have voluntarily suspended their operations in conflict areas,” the Ministry of Commerce said. It accused the United States and Western media of spreading “false information” about Chinese drone exports. 

The government defended its dealings Friday with Russia as “normal economic and trade cooperation” after a U.S. intelligence report said Beijing possibly provided equipment used in Ukraine that might have military applications. 

The report cited Russian customs data that showed Chinese state-owned military contractors supplied drones, navigation equipment, fighter jet parts and other goods. 

The Biden administration has warned Beijing of unspecified consequences if it supports the Kremlin’s war effort. Last week’s report didn’t say whether any of the trade cited might trigger U.S. retaliation. 

Xi and Russian President Vladimir Putin declared before the February 2022 invasion that their governments had a “no-limits” friendship. Beijing has blocked efforts to censure Moscow in the United Nations and has repeated Russian justifications for the attack. 

China has “always opposed the use of civilian drones for military purposes,” the Ministry of Commerce said. “The moderate expansion of drone control by China this time is an important measure to demonstrate the responsibility of a responsible major country.” 

The Ukrainian government appealed to DJI in March 2022 to stop selling drones it said the Russian ministry was using to target missile attacks. DJI rejected claims it leaked data on Ukraine’s military positions to Russia. 

AM Radio Fights to Keep Its Spot on US Car Dashboards

The number of AM radio stations in the United States is dwindling. Over the decades, mainstream broadcasters have moved to the FM band — especially music stations — to take advantage of FM’s superior audio fidelity. Now, there is a new threat to America’s remaining 4,000 AM stations. Some automakers want to kick AM off their dashboard radios.

In Dimmitt, in the state of Texas, that has Nancy and Todd Whalen worried. For eight years, they’ve owned KDHN-AM 1470, on the air since 1963. The Whalens are heard live on the station’s morning show and are KDHN’s sole employees.

“We came here to Dimmitt and told people that we wanted to give them something to be proud of. And we feel like what we’ve done and what we continue to do is provide that, not just for Dimmitt but for all the small towns in the area that no longer have local radio stations,” Nancy said.

KDHN, known as “The Twister,” also has received a Federal Communications Commission license for an FM (frequency modulation) translator, limited to 250 watts, which simulcasts the AM (amplitude modulation) signal. But the 500-watt AM signal covers more territory — about a 160-kilometer (99-mile) radius — compared with the 30-kilometer (19-mile) reach of the FM signal.

“The AM radio station is everything for us,” Nancy Whalen said. “We just turned on the FM translator, it’ll be two years in September. But the AM signal has been our bread and butter since the beginning.”

Where the profit is

Some urban station owners have decided it is more profitable to sell the real estate on which their antenna towers sit rather than continue to try to make money from commercials targeting a dwindling audience. That is what happened to KDWN in Las Vegas, Nevada, which was authorized by the FCC to transmit the maximum 50,000 watts allowed for AM stations. Corporate owner Audacy sold its 15-hectare (37-acre) transmission site on desert land last year to a real estate developer for $40 million and then switched off the powerful AM station, which had listeners across the entire Western U.S. at night.

Unlike FM band stations, which are limited to line-of-sight reception by the laws of physics, lower-frequency AM signals bounce off the ionosphere after sunset, giving them a range of hundreds and sometimes thousands of kilometers. FM stations have a greater audio frequency range, as they are allowed a wider bandwidth compared with AM stations. The most popular formats for the remaining AM stations in the United States are news/talk programming and sports, followed by country music.

Todd Whalen said audio quality is not an issue for his KDHN listeners.

“Our AM signal actually sounds as good as an FM signal because we have a state-of-the-art transmitter and processing,” he explained.

Recently, some major auto manufacturers announced plans to stop including AM radios in new vehicles, contending electric vehicle motor systems cause interference with reception, making stations unlistenable and, thus, the AM band obsolete.

Legislative response

Broadcasters and lawmakers object.

U.S. Senator Amy Klobuchar, a Minnesota Democrat, posted a video to Twitter about legislation she co-sponsored that would require vehicle manufacturers to include AM receivers in all new vehicles.

The Senate Committee on Commerce, Science, and Transportation approved via voice vote Thursday the “AM For Every Vehicle Act,” sending it to the Senate floor for consideration.

“Maybe people don’t understand how rural works, but a lot of people drive long distances to get to their town, to visit their friends,” Klobuchar said in her online video. She added she did not think auto manufacturers “understand how important AM radio is to people today.”

People like Rodney Hunter, who manages two grain silo sites in Tulia and Edmonson, Texas, said news on AM radio about corn, cotton, wheat and cattle are critical.

“I’ve had at least three farmers that called in today and said they heard on the radio that the markets are up. And without AM radio that would not be possible,” he told VOA on a recent morning at the grain silo in Tulia when a halt to grain shipments from Ukraine was causing a surge in prices of some agricultural commodities.

“Farmers are in their pickups or in their tractors, and they’re going up and down the road,” Hunter said. Relying on AM radio reception in vehicles “is just a lot handier” than trying to get crop-related news online.

Different languages

A five-hour drive southeast of Tulia found Joann Whang, in Carrollton, tuned in to another AM station. She’s not a farmer, but a pharmacist — listening to Korean-language KKDA-AM 730.

“My friend told me about it,” she said. “At first, I thought a Korean radio station is usually for the older generation, but it was actually pretty interesting. You can get all the information and highlights and even K-pop [music].”

The station is owned by the DK Media Group, which also publishes two Korean-language weekly newspapers in the Dallas-Fort Worth area. The company’s president, Stephanie Min Kim, said having no AM radios in new cars would imperil ethnic broadcasters who cannot afford FM licenses, which are scarce and more expensive.

“We feel that it is our duty to help and support our Korean immigrants integrate into American society,” said Kim, a former broadcaster at KBS in South Korea. “So, we invite experts from the law, health care and education to provide practical and useful information” over the station’s airwaves.

“More than 40% of radio listening is done in the car,” Kim said. “So, I think AM radio is facing a potential existential threat.”

That existential threat also affects another Dallas-area station — KHSE at 700 on the AM dial.

The station, known as Radio Caravan, with announcers speaking in Hindi, Tamil, English and other languages, plays South Asian music and provides information about community events.

While Radio Caravan also simulcasts on FM from a site 50 kilometers (31 miles) north of Dallas, that transmission does not have the reach of the 1,500-watt AM station whose transmitter and antenna array are located at a different site, also about 50 kilometers northeast of downtown Dallas.

“I don’t think AM can ever go away,” said Radio Caravan program host Aparna Ragnan, who suggested that auto manufacturers find a way to minimize the noise interference in electric vehicles instead of stopping installation of AM receivers in new cars and trucks.

Content is key

The inferior audio range of AM is not really an issue, said Radio Caravan’s station manager, Vaibhav Sheth.

“It’s the content that matters,” according to Sheth, who also noted that AM stations are a critical link for the alerts sent by the nationwide Emergency Alert System.

“Those sirens go off and your regular programming is interrupted, and when there’s an emergency, whether it’s a tornado warning, whether it’s a child abduction, whatever it is that’s happening, it goes to the AM frequency,” he said.

Some radio stations, including those struggling with personnel costs to fill 24 hours of programming, are beginning to use artificial intelligence, known as AI, which can grab real-time information, such as weather forecasts and sports scores, and use cloned announcer voices to make the computer-generated content sound live.

Kim at DK Media Group said AI might be valuable for some content, such as commercials, but she did not see it replacing empathetic voices interacting with the community in live programming.

“We are human beings,” Kim said.

The Whalens said they have not considered AI, even though they could use extra help at their “mom ’n’ pop”-style station, which also broadcasts some local high school sports.

“We like being live in the studio. There’s just a different energy and a different feel,” said Nancy Whalen. “I think people listening can tell that over the radio. Artificial Intelligence is just that, and it’s not going to give the listener what they’re really looking for.”

Her husband, Todd, agreed. “We don’t want to be a canned radio station, because there’s a lot of canned stations out there.”

AM Radio Fights to Keep Its Spot on US Car Dashboards

There has been a steady decline in the number of AM radio stations in the United States. Over the decades, urban and mainstream broadcasters have moved to the FM band, which has better audio fidelity, although more limited range. Now, there is a new threat to the remaining AM stations. Some automakers want to kick AM off their dashboard radios, deeming it obsolete. VOA’s chief national correspondent, Steve Herman, in the state of Texas, has been tuning in to some traditional rural stations, as well as those broadcasting in languages other than English in the big cities. Camera – Steve Herman and Jonathan Zizzo.

FBI Warns About China Theft of US AI Technology

China is pilfering U.S.-developed artificial intelligence (AI) technology to enhance its own aspirations and to conduct foreign influence operations, senior FBI officials said Friday.

The officials said China and other U.S. adversaries are targeting American businesses, universities and government research facilities to get their hands on cutting-edge AI research and products.

“Nation-state adversaries, particularly China, pose a significant threat to American companies and national security by stealing our AI technology and data to advance their own AI programs and enable foreign influence campaigns,” a senior FBI official said during a background briefing call with reporters.

China has a national plan to surpass the U.S. as the world’s top AI power by 2030, but U.S. officials say much of its progress is based on stolen or otherwise acquired U.S. technology.

“What we’re seeing is efforts across multiple vectors, across multiple industries, across multiple avenues to try to solicit and acquire U.S. technology … to be able to re-create and develop and advance their AI programs,” the senior FBI official said.

The briefing was aimed at giving the FBI’s view of the threat landscape, not to react to any recent events, officials said.

FBI Director Christopher Wray sounded the alarm about China’s AI intentions at a cybersecurity summit in Atlanta on Wednesday. He warned that after “years stealing both our innovation and massive troves of data,” the Chinese are well-positioned “to use the fruits of their widespread hacking to power, with AI, even more powerful hacking efforts.”

China has denied the allegations.

The senior FBI official briefing reporters said that while the bureau remains focused on foreign acquisition of U.S. AI technology and talent, it is also concerned about future threats from foreign adversaries who exploit that technology.

“However, if and when the technology is acquired, their ability to deploy it in an instance such as [the 2024 presidential election] is something that we are concerned about and do closely monitor.”

With the recent surge in AI use, the U.S. government is grappling with its benefits and risks. At a White House summit earlier this month, top AI executives agreed to institute guidelines to ensure the technology is developed safely.

Even as the technology evolves, cybercriminals are actively using AI in a variety of ways, from creating malicious code to crafting convincing phishing emails and carrying out insider trading of securities, officials said.

“The bulk of the caseload that we’re seeing now and the scope of activity has in large part been on criminal actor use and deployment of AI models in furtherance of their traditional criminal schemes,” the senior FBI official said.

Violent extremists and traditional terrorist actors are also experimenting with various AI tools to build explosives, the official warned.

“Some have gone as far as to post information about their engagements with the AI models and the success which they’ve had defeating security measures in most cases or in a number of cases,” he said.

The FBI has observed a wave of fake AI-generated websites with millions of followers that carry malware to trick unsuspecting users, he said. The bureau is investigating the websites.

Wray cited a recent case in which a Dark Net user created malicious code using ChatGPT.

The user “then instructed other cybercriminals on how to use it to re-create malware strains and techniques based on common variants,” Wray said.

“And that’s really just the tip of the iceberg,” he said. “We assess that AI is going to enable threat actors to develop increasingly powerful, sophisticated, customizable and scalable capabilities — and it’s not going to take them long to do it.”

Prospect of AI Producing News Articles Concerns Digital Experts 

Google’s work developing an artificial intelligence tool that would produce news articles is concerning some digital experts, who say such tools risk inadvertently spreading propaganda or threatening source safety. 

The New York Times reported last week that Google is testing a new product, known internally by the working title Genesis, that employs artificial intelligence, or AI, to produce news articles.

Genesis can take in information, like details about current events, and create news content, the Times reported. Google has already pitched the product to the Times and other organizations, including The Washington Post and News Corp, which owns The Wall Street Journal.

The launch of the generative AI chatbot ChatGPT last fall has sparked debate about how artificial intelligence can and should fit into the world — including in the news industry.

AI tools can help reporters with research by quickly analyzing data and extracting it from PDF files in a process known as scraping. AI can also help journalists fact-check sources. 

But the apprehension — including potentially spreading propaganda or ignoring the nuance humans bring to reporting — appears to be weightier. These worries extend beyond Google’s Genesis tool to encapsulate the use of AI in news gathering more broadly.

If AI-produced articles are not carefully checked, they could unwittingly include disinformation or misinformation, according to John Scott-Railton, who researches disinformation at the Citizen Lab in Toronto.  

“It’s sort of a shame that the places that are the most friction-free for AI to scrape and draw from — non-paywalled content — are the places where disinformation and propaganda get targeted,” Scott-Railton told VOA. “Getting people out of the loop does not make spotting disinformation easier.”

Paul M. Barrett, deputy director at New York University’s Stern Center for Business and Human Rights, agrees that artificial intelligence can turbocharge the dissemination of falsehoods. 

“It’s going to be easier to generate myths and disinformation,” he told VOA. “The supply of misleading content is, I think, going to go up.”

In an emailed statement to VOA, a Google spokesperson said, “In partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide AI-enabled tools to help their journalists with their work.”

“Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity,” the spokesperson said. “Quite simply these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”

The implications for a news outlet’s credibility are another important consideration regarding the use of artificial intelligence.

News outlets are presently struggling with a credibility crisis. Half of Americans believe that national news outlets try to mislead or misinform audiences through their reporting, according to a February report from Gallup and the Knight Foundation.

“I’m puzzled that anyone thinks that the solution to this problem is to introduce a much less credible tool, with a much shakier command of facts, into newsrooms,” said Scott-Railton, who was previously a Google Ideas fellow.

Reports show that AI chatbots regularly produce responses that are entirely wrong or made up. AI researchers refer to this phenomenon as “hallucination.”

Digital experts are also cautious about what security risks may be posed by using AI tools to produce news articles. Anonymous sources, for instance, might face retaliation if their identities are revealed.

“All users of AI-powered systems need to be very conscious of what information they are providing to the system,” Barrett said.

“The journalist would have to be cautious and wary of disclosing to these AI systems information such as the identity of a confidential source, or, I would say, even information that the journalist wants to make sure doesn’t become public,” he said. 

Scott-Railton said he thinks AI probably has a future in most industries, but it’s important not to rush the process, especially in news. 

“What scares me is that the lessons learned in this case will come at the cost of well-earned reputations, will come at the cost of factual accuracy when it actually counts,” he said.  

Vietnam Orders Social Media Firms to Cut ‘Toxic’ Content Using AI

HO CHI MINH CITY, VIETNAM – Vietnam’s demand that international social media firms use artificial intelligence to identify and remove “toxic” online content is part of an ever-expanding and alarming campaign to pressure overseas platforms to suppress freedom of speech in the country, rights groups, experts and activists say.

Vietnam is a lucrative market for overseas social media platforms. Of the country’s population of nearly 100 million, 75.6 million are Facebook users, according to Singapore-based research firm Data Reportal. And since Vietnamese authorities rolled out tighter restrictions on online content and ordered social media firms to remove content the government deems anti-state, social media sites have largely complied with government demands to silence online critiques of the government, experts and rights groups told VOA.

“Toxic” is a term used broadly to refer to online content which the state deems to be false, violent, offensive, or anti-state, according to local media reports.

During a mid-year review conference on June 30, Vietnam’s Information Ministry ordered international tech firms to use artificial intelligence to find and remove so-called toxic content automatically, according to a report from state-run broadcaster Vietnam Television. Details have not been revealed on how or when companies must comply with the new order.

Le Quang Tu Do, the head of the Authority of Broadcasting and Electronic Information, had noted during an April 6 news conference that Vietnamese authorities have economic, technical and diplomatic tools to act against international platforms, according to a local media report. According to the report he said the government could cut off social platforms from advertisers, banks, and e-commerce, block domains and servers, and advise the public to cease using platforms with toxic content.

“The point of these measures is for international platforms without offices in Vietnam, like Facebook and YouTube, to abide by the law,” Do said.

Pat de Brun, Amnesty International’s deputy director of Amnesty Tech, told VOA the latest demand is consistent with Vietnam’s yearslong strategy to increase pressure on social media companies. De Brun said it is the government’s broad definition of what is toxic, rather than use of artificial intelligence, that is of most human rights concern because it silences speech that can include criticism of government and policies.

“Vietnamese authorities have used exceptionally broad categories to determine content that they find inappropriate and which they seek to censor. … Very, very often this content is protected speech under international human rights law,” de Brun said. “It’s really alarming to see that these companies have relented in the face of this pressure again and again.”

During the first half of this year, Facebook removed 2,549 posts, YouTube removed 6,101 videos, and TikTok took down 415 links, according to an Information Ministry statement.

Online suppression

Nguyen Khac Giang, a research fellow at Singapore’s ISEAS-Yusof Ishak Institute, told VOA that heightened online censorship has been led by the conservative faction within Vietnam’s Communist Party, which gained power in 2016.

Nguyen Phu Trong was elected as general secretary in 2016, putting a conservative in the top position within the one-party state. Along with Trong, other conservative-minded leaders rose within government the same year, pushing out reformists, Giang said. Efforts to control the online sphere led to 2018’s Law on Cybersecurity, which expands government control of online content and attempts to localize user data in Vietnam. The government also established Force 47 in 2017, a military unit with reportedly 10,000 members assigned to monitor online space.

On July 19, local media reported that the information ministry proposed taking away the internet access of people who commit violations online, especially via livestreams on social media sites.

Activists often see their posts removed and lose access to their accounts, and the government arrests Vietnamese bloggers, journalists and critics living in the country for their online speech. They are often charged under Article 117 of Vietnam’s Criminal Code, which criminalizes “making, storing, distributing or disseminating information, documents and items against the Socialist Republic of Vietnam.”

According to The 88 Project, a U.S.-based human rights group, 191 activists are in jail in Vietnam, many of whom have been arrested for online advocacy and charged under Article 117.

“If you look at the way that social media is controlled in Vietnam, it is very starkly contrasted with what happened before 2016,” Giang said. “What we are seeing now is only a signal of what we’ve been seeing for a long time.”

Giang said the government order is a tool to pressure social media companies to use artificial intelligence to limit content, but he warned that online censorship and limits on public discussion could cause political instability by eliminating a channel for public feedback.

“The story here is that they want the social media platforms to take more responsibility for whatever happens on social media in Vietnam,” Giang said. “If they don’t allow people to report on wrongdoings … how can the [government] know about it?”

Vietnamese singer and dissident Do Nguyen Mai Khoi, now living in the United States, has been contacting Facebook since 2018 on behalf of activists who have lost accounts, had posts censored or become targets of coordinated online attacks by pro-government Facebook users. Although she has received some help from the company in the past, responses to her requests have become more infrequent.

“[Facebook] should use their leverage,” she added. “If Vietnam closed Facebook, everyone would get angry and there’d be a big wave of revolution or protests.”

Representatives of Meta Platforms Inc., Facebook’s parent company, did not respond to VOA requests for comment.

Vietnam is also a top concern in the region for its harsh punishment of online speech, said Dhevy Sivaprakasam, Asia Pacific policy counsel at Access Now, a nonprofit defending digital rights.

“I think it’s one of the most egregious examples of persecution on the online space,” she said.

Ambassador: China Will Respond in Kind to US Chip Export Restrictions 

If the United States imposes more investment restrictions and export controls on China’s semiconductor industry, Beijing will respond in kind, according to China’s ambassador to the U.S., Xie Feng, whose tough talk analysts see as the latest response from a so-called wolf-warrior diplomat.

Xie likened the U.S. export controls to “restricting their opponents to only wearing old swimsuits in swimming competitions, while they themselves can wear advanced shark swimsuits.”

Xie’s remarks, made at the Aspen Security Forum last week, came as the U.S. finalized its mechanism for vetting possible investments in China’s cutting-edge technology. These include semiconductors, quantum computing and artificial intelligence, all of which have military as well as commercial applications.

The U.S. Department of Commerce is also considering imposing new restrictions on exports of artificial intelligence (AI) chips to China, despite the objections of U.S. chipmakers.

Wen-Chih Chao, of the Institute of Strategic and International Affairs Studies at Taiwan’s National Chung Cheng University, characterized Xie’s remarks as part of China’s “wolf-warrior” diplomacy, as China’s increasingly assertive style of foreign policy has come to be known. 

He said the threatened Chinese countermeasures would depend on whether Beijing just wants to show an “attitude” or has decided to confront Western countries head-on.

He pointed to China’s investigations of some U.S. companies operating in China. He sees these as China retaliating by “expressing an attitude.”

Getting tougher

But with the tit-for-tat moves between the U.S. and China escalating, Chao said China’s retaliation is getting tougher.

An example, he said, is the export controls Beijing slapped on gallium, germanium and other raw minerals used in high-end chip manufacturing. As of August 1, exporters must apply for permission from China’s Ministry of Commerce and report the details of overseas buyers.

Chao said China might go further by blocking or limiting the supply of batteries for electric vehicles, mechanical components needed for wind-power generation, gases needed for solar panels, and raw materials needed for pharmaceuticals and semiconductor manufacturing.

China wants to show Western countries that they must think twice when imposing sanctions on Chinese semiconductors or companies, he said.

But other analysts said Beijing does not want to escalate its retaliation to the point where further moves by the U.S. and its allies harm China’s economy, which is only slowly recovering from draconian pandemic lockdowns.

No cooperation

Chao also said China could retaliate by refusing to cooperate on efforts to limit climate change, or by saying “no” when asked to use its influence with Pyongyang to lessen tensions on the Korean Peninsula.

“These are the means China can use to retaliate,” Chao said. “I think there are a lot of them. These may be its current bargaining chips, and it will not use them all simultaneously. It will see how the West reacts. It may show its ability to counter the West step by step.”

Cheng Chen, a political science professor at the State University of New York at Albany, said China’s recent announcement about gallium, germanium and other chipmaking metals is a warning of its ability, and willingness, to retaliate against the U.S.

Even if the U.S. invests heavily in reshaping these industrial chains, it will take a long time to assemble the links, she said.

Chen said that if the U.S. further escalates sanctions on China’s high-tech products, China could retaliate in kind — using tariffs for tariffs, sanctions for sanctions, and regulations for regulations.

Most used strategy

Yang Yikui, an assistant researcher at Taiwan National Defense Security Research Institute, said economic coercion is China’s most commonly used retaliatory tactic.

He said China imposed trade sanctions on salmon imported from Norway when the late pro-democracy activist Liu Xiaobo was awarded the Nobel Peace Prize in 2010. Beijing tightened restrictions on imports of Philippine bananas, citing quarantine issues, during a 2012 maritime dispute with Manila over a shoal in the South China Sea.

Yang said studies show that since 2018, China’s sanctions have become more diverse and detailed, allowing it to retaliate directly and indirectly. It can also use its economic and trade relations to force companies from other countries to participate.

Yang said that after Lithuania agreed in 2021 to let Taiwan establish a representative office in Vilnius, China downgraded diplomatic relations from the ambassadorial level to the chargé d’affaires level and removed the country from its customs system database, making it impossible for Lithuanian goods to clear customs.

Beijing then reduced the credit lines of Lithuanian companies operating in the Chinese market and pressured other multinational companies to sanction Lithuania. Companies in Germany, France, Sweden and other countries reportedly had cargoes stopped at Chinese ports because they contained products made in Lithuania.

When Australia called for an investigation into the origins of COVID-19, an upset China imposed tariffs or import bans on Australian beef, wine, cotton, timber, lobster, coal and barley. But Beijing did not sanction Australian iron ore, wool and natural gas, because sanctions on those products stood to hurt key Chinese sectors.

Adrianna Zhang contributed to this report.

US Works With Artificial Intelligence Companies to Mitigate Risks

Can artificial intelligence wipe out humanity?

A senior U.S. official said the United States government is working with leading AI companies and at least 20 countries to set up guardrails to mitigate potential risks, while focusing on the innovative edge of AI technologies.

Nathaniel Fick, the U.S. ambassador-at-large for cyberspace and digital policy, spoke Tuesday to VOA about the voluntary commitments from leading AI companies to ensure safety and transparency around AI development.

One of the popular generative AI platforms is ChatGPT, created by U.S.-based OpenAI, which is not accessible in China. If a user were to ask one of China’s own chatbots politically sensitive questions in Mandarin Chinese, such as “What is the 1989 Tiananmen Square Massacre?”, the user would get information heavily censored by the Beijing government.

China has finalized rules governing its own generative AI. The new regulation will be effective August 15. Chinese chatbots reportedly have built-in censorship to avoid sensitive keywords.

“I think that the development of these systems actually requires a foundation of openness, of interoperability, of reliability of data. And an authoritarian top-down approach that controls the flow of information over time will undermine a government’s ability, a company’s ability, to sustain an innovative edge in AI,” Fick told VOA.

The following excerpts from the interview have been edited for brevity and clarity.

VOA: Seven leading AI companies made eight promises about what they will do with their technology. What do these commitments actually mean?

Nathaniel Fick, the U.S. ambassador-at-large for cyberspace and digital policy: As we think about governance of this new tech frontier of artificial intelligence, our North Star ought to be preserving our innovative edge and ensuring that we can continue to maintain a global leadership position in the development of robust AI tools, because the upside to solve shared challenges around the world is so immense. …

These commitments fall into three broad categories. First, the companies have a duty to ensure that their products are safe. … Second, the companies have a responsibility to ensure that their products are secure. … Third, the companies have a duty to ensure that their products gain the trust of people around the world. And so, we need a way for viewers, consumers, to ascertain whether audio content or visual content is AI-generated or not, whether it is authentic or not. And that’s what these commitments do.

VOA: Would the United States government fund some of these types of safety tests conducted by those companies?

Fick: The United States government has a huge interest in ensuring that these companies, these models, their products are safe, are secure, and are trustworthy. We look forward to partnering with these companies over time to do that. And of course, that could certainly include financial partnership.

VOA: The White House has listed cancer prevention and mitigating climate change as two of the areas where it would like AI companies to focus their efforts. Can you talk about U.S. competition with China on AI? Is that an administration priority?

Fick: We would expect the Chinese approach to artificial intelligence to look very much like the PRC’s [People’s Republic of China] approach to other areas of technology. Generally, top down. Generally, not focused on open expression, not focused on open access to information. And these AI systems, by their very definition, require that sort of openness and that sort of access to large data sets and information.

VOA: Some industry experts have warned that China is spending three times as much as the U.S. to become the world’s AI leader. Can you talk about China’s ambition on AI? Is the U.S. keeping up with the competition?

Fick: We certainly track things like R&D [research and development] and investment dollars, but I would make the point that those are inputs, not outputs. And I don’t think it’s any accident that the leading companies in AI research are American companies. Our innovation ecosystem is supported by foundational research, immigration policy that attracts the world’s best talent, and tax and regulatory policies that encourage business creation and growth.

VOA: Any final thoughts about the risks? Can AI models be used to develop bioweapons? Can AI wipe out humanity?

Fick: My experience has been that risk and return really are correlated in life and in financial markets. There’s huge reward and promise in these technologies and of course, at the same time, they bring with them significant risks. We need to maintain our North Star, our focus on that innovative edge and all of the promise that these technologies bring in. At the same time, it’s our responsibility as governments and as responsible companies leading in this space to put the guardrails in place to mitigate those risks.

Elon Musk Reveals New Black and White X Logo To Replace Twitter’s Blue Bird

Elon Musk has unveiled a new black and white “X” logo to replace Twitter’s famous blue bird as he follows through with a major rebranding of the social media platform he bought for $44 billion last year.

Musk replaced his own Twitter icon with a white X on a black background and posted a picture on Monday of the design projected on Twitter’s San Francisco headquarters.

The X started appearing on the top of the desktop version of Twitter on Monday, but the bird was still dominant across the phone app.

Musk had asked fans for logo ideas and chose one, which he described as minimalist Art Deco, saying it “certainly will be refined.”

“And soon we shall bid adieu to the twitter brand and, gradually, all the birds,” Musk tweeted Sunday.

The X.com web domain now redirects users to Twitter.com, Musk said.

In response to questions about what tweets would be called when the rebranding is done, Musk said they would be called Xs.

Musk, CEO of Tesla, has long been fascinated with the letter. The billionaire is also CEO of rocket company Space Exploration Technologies Corp., commonly known as SpaceX. And in 1999, he founded a startup called X.com, an online financial services company now known as PayPal.

He calls his son with the singer Grimes, whose actual name is a collection of letters and symbols, “X.”

Musk’s Twitter purchase and rebranding are part of his strategy to create what he’s dubbed an “everything app” similar to China’s WeChat, which combines video chats, messaging, streaming and payments.

Linda Yaccarino, the longtime NBC Universal executive Musk tapped to be Twitter CEO in May, posted the new logo and weighed in on the change, writing on Twitter that X would be “the future state of unlimited interactivity — centered in audio, video, messaging, payments/banking — creating a global marketplace for ideas, goods, services, and opportunities.”

Experts, however, predicted the new name will confuse much of Twitter’s audience, which has already been souring on the social media platform following a raft of Musk’s other changes. The site also faces new competition from Threads, the new app by Facebook and Instagram parent Meta that directly targets Twitter users.

Elon Musk Says Twitter to Change Logo, Adieu to ‘All the Birds’

Elon Musk said on Sunday he was looking to change Twitter’s logo, tweeting: “And soon we shall bid adieu to the twitter brand and, gradually, all the birds.”

In a post on the site at 12:06 a.m. ET (0406 GMT), the social media platform’s billionaire owner added: “If a good enough X logo is posted tonight, we’ll make (it) go live worldwide tomorrow.”

Musk posted an image of a flickering “X,” and later in a Twitter Spaces audio chat replied “Yes” when asked if the Twitter logo will change, adding that “it should have been done a long time ago.”

Under Musk’s tumultuous tenure since he bought Twitter in October, the company has changed its business name to X Corp, reflecting the billionaire’s vision to create a “super app” like China’s WeChat.

The company did not immediately respond to a request for comment.

Twitter’s website says its logo, depicting a blue bird, is “our most recognizable asset.” “That’s why we’re so protective of it,” it added.

The bird was temporarily replaced in April by Dogecoin’s Shiba Inu dog, helping drive a surge in the meme coin’s market value.

The company came under widespread criticism from users and marketing professionals when Musk announced early this month that Twitter would limit how many tweets per day various accounts can read.

The daily limits helped in the growth of Meta-owned rival service Threads, which crossed 100 million sign-ups within five days of launch.

Twitter’s most recent complication was a lawsuit filed on Tuesday claiming the firm owes at least $500 million in severance pay to former employees. Since Musk acquired it, the company has laid off more than half its workforce to cut costs.

AI Firms Strike Deal With White House on Safety Guidelines 

The White House on Friday announced that the Biden administration had reached a voluntary agreement with seven companies building artificial intelligence products to establish guidelines meant to ensure the technology is developed safely.

“These commitments are real, and they’re concrete,” President Joe Biden said in comments to reporters. “They’re going to help … the industry fulfill its fundamental obligation to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and our shared values.”

The companies that sent leaders to the White House were Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI. The firms are all developing systems called large language models (LLMs), which are trained using vast amounts of text, usually taken from the publicly accessible internet, and use predictive analysis to respond to queries conversationally.
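The “predictive analysis” described above can be illustrated, in a drastically simplified form, with a toy word-prediction model. This sketch is orders of magnitude simpler than any real LLM, but it shows the core idea: learn from text which word tends to follow which, then predict the next one.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict[str, Counter]:
    """Count, for each word, which words follow it in the corpus."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict[str, Counter], word: str) -> str:
    """Predict the most frequent follower of `word`."""
    return model[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # → cat
```

Real large language models replace word counts with billions of learned parameters and condition on long contexts rather than a single word, but they are likewise trained to predict what text comes next.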

In a statement, OpenAI, which created the popular ChatGPT service, said, “This process, coordinated by the White House, is an important step in advancing meaningful and effective AI governance, both in the U.S. and around the world.”

Safety, security, trust

The agreement, released by the White House on Friday morning, outlines three broad areas of focus: assuring that AI products are safe for public use before they are made widely available; building products that are secure and cannot be misused for unintended purposes; and establishing public trust that the companies developing the technology are transparent about how they work and what information they gather.

As part of the agreement, the companies pledged to conduct internal and external security testing before AI systems are made public in order to ensure they are safe for public use, and to share information about safety and security with the public.

Further, the commitment obliges the companies to keep strong safeguards in place to prevent the inadvertent or malicious release of technology and tools not intended for the general public, and to support third-party efforts to detect and expose any such breaches.

Finally, the agreement sets out a series of obligations meant to build public trust. These include assurances that AI-created content will always be identified as such; that companies will offer clear information about their products’ capabilities and limitations; that companies will prioritize mitigating the risk of potential harms of AI, including bias, discrimination and privacy violations; and that companies will focus their research on using AI to “help address society’s greatest challenges.”

The administration said that it is at work on an executive order and will pursue legislation with Congress to “help America lead the way in responsible innovation.”

Just a start

Experts contacted by VOA all said that the agreement marked a positive step on the road toward effective regulation of emerging AI technology, but they also warned that there is far more work to be done, both in understanding the potential harm these powerful models might cause and finding ways to mitigate it.

“No one knows how to regulate AI — it’s very complex and is constantly changing,” said Susan Ariel Aaronson, a professor at George Washington University and the founder and director of the research institute Digital Trade and Data Governance Hub.

“The White House is trying very hard to regulate in a pro-innovative way,” Aaronson told VOA. “When you regulate, you always want to balance risk — protecting people or businesses from harm — with encouraging innovation, and this industry is essential for U.S. economic growth.”

She added, “The United States is trying and so I want to laud the White House for these efforts. But I want to be honest. Is it sufficient? No.”

‘Conversational computing’

It’s important to get this right, because models like ChatGPT, Google’s Bard and Anthropic’s Claude will increasingly be built into the systems that people use to go about their everyday business, said Louis Rosenberg, the CEO and chief scientist of the firm Unanimous AI. 

“We’re going into an age of conversational computing, where we’re going to talk to our computers and our computers are going to talk back,” Rosenberg told VOA. “That’s how we’re going to engage search engines. That’s how we’re going to engage apps. That’s how we’re going to engage productivity tools.”

Rosenberg, who has worked in the AI field for 30 years and holds hundreds of related patents, said that when it comes to LLMs being so tightly integrated into our day-to-day life, we still don’t know everything we should be concerned about.

“Many of the risks are not fully understood yet,” he said. Conventional computer software is very deterministic, he said, meaning that programs are built to do precisely what programmers tell them to do. By contrast, the exact way in which large language models operate can be opaque even to their creators.

The models can display unintended bias, can parrot false or misleading information, and can say things that people find offensive or even dangerous. In addition, many people will interact with them through a third-party service, such as a website, that integrates the large language model into its offering, but can tailor its responses in ways that might be malicious or manipulative.

Many of these problems will become apparent only after these systems have been deployed at scale, by which point they will already be in use by the public.

“The problems have not yet surfaced at a level where policymakers can address them head-on,” Rosenberg said. “The thing that is, I think, positive, is that at least policymakers are expecting the problems.”

More stakeholders needed 

Benjamin Boudreaux, a policy analyst with the RAND Corporation, told VOA that it was unclear how much actual change in the companies’ behavior Friday’s agreement would generate.

“Many of the things that the companies are agreeing to here are things that the companies already do, so it’s not clear that this agreement really shifts much of their behavior,” Boudreaux said. “And so I think there is still going to be a need for perhaps a more regulatory approach or more action from Congress and the White House.”

Boudreaux also said that as the administration fleshes out its policy, it will have to broaden the range of participants in the conversation.

“This is just a group of private sector entities; this doesn’t include the full set of stakeholders that need to be involved in discussions about the risks of these systems,” he said. “The stakeholders left out of this include some of the independent evaluators, civil society organizations, nonprofit groups and the like, that would actually do some of the risk analysis and risk assessment.”

Japan Signs Chip Development Deal With India 

Japan and India have signed an agreement for the joint development of semiconductors, in what appears to be another indication of how global businesses are reconfiguring post-pandemic supply chains as China loses its allure for foreign companies.

India’s Ashwini Vaishnaw, minister for railways, communications, and electronics and information technology, and Japan’s minister of economy, trade and industry, Yasutoshi Nishimura, signed the deal Thursday in New Delhi.

The memorandum covers “semiconductor design, manufacturing, equipment research, talent development and [will] bring resilience in the semiconductor supply chain,” Vaishnaw said.

Nishimura said after his meeting with Vaishnaw that “India has excellent human resources” in fields such as semiconductor design.

“By capitalizing on each other’s strengths, we want to push forward with concrete projects as early as possible,” Nishimura told a news conference, Kyodo News reported.  

Andreas Kuehn, a senior fellow at the American office of Observer Research Foundation, an Indian think tank, told VOA Mandarin: “Japan has extensive experience in this industry and understands the infrastructure in this field at a broad level. It can be an important partner in advancing India’s semiconductor ambitions.”

Shift from China

Foreign companies have been shifting their manufacturing away from China over the past decade, prompted by increasing labor costs.

More recently, Beijing’s push for foreign companies to share their technologies and data has increased uneasiness with China’s business climate, according to surveys of U.S. and European businesses there.

The discomfort stems from a 2021 data security law that Beijing updated in April and put into effect on July 1. Its broad anti-espionage language does not define what falls under China’s national security or interests. 

After taking office in 2014, Indian Prime Minister Narendra Modi launched a “Make in India” initiative with the goal of turning India into a global manufacturing center with an expanded chip industry.

The initiative is not entirely about making India a self-sufficient economy, but more about welcoming investors from countries with similar ideas. Japan and India are part of the QUAD security framework, along with the United States and Australia, which aims to strengthen cooperation as a group, as well as bilaterally between members, to maintain peace and stability in the region.

Jagannath Panda, director of the Stockholm Center for South Asian and Indo-Pacific Affairs of the Institute for Security and Development Policy, said that the international community “wants a safe region where the semiconductor industry can continue to supply the global market. This chain of linkages is critical, and India is at the heart of the Indo-Pacific region” — a location not lost on chip companies in the United States, Taiwan and Japan that are reevaluating supply chain security and reducing their dependence on China.

Looking ahead

Panda told VOA Mandarin: “The COVID pandemic has proved that we should not rely too much on China. [India’s development of the chip industry] is also to prepare India for the next half century. Unless countries with similar ideas such as the United States and Japan cooperate effectively, India cannot really develop its semiconductor industry.”

New Delhi and Washington signed a memorandum of understanding in March to advance cooperation in the semiconductor field.

During Modi’s visit to the United States in June, he and President Joe Biden announced a cooperation agreement to coordinate semiconductor incentive and subsidy plans between the two countries.

Micron, a major chip manufacturer, confirmed on June 22 that it will invest as much as $800 million in India to build a chip assembly and testing plant.

Applied Materials said in June that it plans to invest $400 million over four years to build an engineering center in Bengaluru, Reuters reported. The new center is expected to be located near the company’s existing facility in the city and is likely to support more than $2 billion of planned investments and create 500 new advanced engineering jobs, the company said.

Experts said that although the development of India’s chip industry will not pose a challenge to China in the short term, China’s increasingly unfriendly business environment will prompt international semiconductor companies to consider India as one of the destinations for transferring production capacity.

“China is still a big player in the semiconductor industry, especially traditional chips, and we shouldn’t underestimate that. I don’t think that’s going to go away anytime soon. The world depends on this capacity,” Kuehn said. 

He added: “For multinational companies, China has become a more difficult business environment to operate in. We are likely to see them make other investments outside China after a period of time, which may compete with China’s semiconductor industry, especially in Southeast Asia. India may also play a role in this regard.” 

Bo Gu contributed to this report.

US Tech Leaders Aim for Fewer Export Curbs on AI Chips for China 

Intel Corp. has introduced a processor in China that is designed for AI deep-learning applications despite reports of the Biden administration considering additional restrictions on Chinese companies to address loopholes in chip export controls.

The chip giant’s product launch on July 11 is part of an effort by U.S. technology companies to work around, or win relief from, government export controls on the Chinese market as the U.S. government, citing national security concerns, continues to tighten restrictions on China’s artificial intelligence industry.

CEOs of U.S. chipmakers including Intel, Qualcomm and Nvidia met with U.S. Secretary of State Antony Blinken on Monday to urge a halt to more controls on chip exports to China, Reuters reported. Commerce Secretary Gina Raimondo, National Economic Council director Lael Brainard and White House national security adviser Jake Sullivan were among other government officials meeting with the CEOs, Reuters said.

The meeting came after China announced restrictions on the export of materials that are used to construct chips, a response to escalating efforts by Washington to curb China’s technological advances.

VOA Mandarin contacted the U.S. chipmakers for comment but has yet to receive responses.

Reuters reported Nvidia Chief Financial Officer Colette Kress said in June that “over the long term, restrictions prohibiting the sale of our data center graphic processing units to China, if implemented, would result in a permanent loss of opportunities for the U.S. industry to compete and lead in one of the world’s largest markets and impact on our future business and financial results.”

Before the meeting with Blinken, John Neuffer, president of the Semiconductor Industry Association, which represents the chip industry, said in a statement to The New York Times that the escalation of controls posed a significant risk to the global competitiveness of the U.S. industry.

“China is the world’s largest market for semiconductors, and our companies simply need to do business there to continue to grow, innovate and stay ahead of global competitors,” he said. “We urge solutions that protect national security, avoid inadvertent and lasting damage to the chip industry, and avert future escalations.”

According to the Times, citing five sources, the Biden administration is considering additional restrictions on the sale of high-end chips used to power artificial intelligence to China. The goal is to limit technological capacity that could aid the Chinese military while minimizing the impact such rules would have on private companies. Such a move could speed up the tit-for-tat salvos in the U.S.-China chip war, the Times reported.

And The Wall Street Journal reported last month that the White House was exploring how to restrict the leasing of cloud services to AI firms in China.

But the U.S. controls appear to be merely slowing, rather than stopping, China’s AI development.

Last October, the U.S. Commerce Department banned Nvidia from selling two of its most advanced AI-critical chips, the A100 and the newer H100, to Chinese customers, citing national security concerns. In November, Nvidia designed the A800 and H800 chips that are not subject to export controls for the Chinese market.

According to the Journal, the U.S. government is considering new bans on the A800 exports to China.

According to a report published in May by TrendForce, a market intelligence and professional consulting firm, the A800, along with Nvidia’s H100 and A100, is already among the most widely used mainstream products for AI-related computing.

Combining chips

Robert Atkinson, founder and president of the Information Technology and Innovation Foundation, told VOA in a phone interview that although these chips are not the most advanced, they can still be used by China.  

“What you can do, though, is you can combine lesser, less powerful chips and just put more of them together. And you can still do a lot of AI processing with them. It just makes it more expensive. And it uses more energy. But the Chinese are happy to do that,” Atkinson said.

As for the Chinese use of cloud computing, Hanna Dohmen, a research analyst at Georgetown’s Center for Security and Emerging Technology, told VOA Mandarin in a phone interview that companies can rent chips through cloud service providers.  

In practice, it is similar to renting a shared e-scooter: the rider pays a fee to unlock the scooter’s key function, its wheels.

For example, Dohmen said that Nvidia’s A100, which is “controlled and cannot be exported to China, per the October 7 export control regulations,” can be legally accessed by Chinese companies that “purchase services from these cloud service providers to gain virtual access to these controlled chips.”

Dohmen acknowledged it is not clear how many Chinese AI research institutions and companies are using American cloud services.

“There are also Chinese regulations … on cross-border data that might prohibit or limit to what extent Chinese companies might be willing to use foreign cloud service providers outside of China to develop their AI models,” she said.

Black market chips

In another workaround, Atkinson said Chinese companies can buy black market chips. “It’s not clear to me that these export controls are going to be able to completely cut off Chinese computing capabilities. They might slow them down a bit, but I don’t think they’re going to cut them off.”

According to an as yet unpublished report by the Information Technology and Innovation Foundation, China is already ahead of Europe in terms of the number of AI startups and is catching up with the U.S.

Although Chinese websites account for less than 2% of global network traffic, Atkinson said, Chinese government data management can make up for the lack of dialogue texts, images and videos that are essential for AI large-scale model training.

 “I do think that the Chinese will catch up and surpass the U.S. unless we take fairly serious steps,” Atkinson said.  

UN Security Council Debates Virtues, Failings of Artificial Intelligence

Artificial intelligence was the dominant topic at the United Nations Security Council this week.

In his opening remarks at the session, U.N. Secretary-General Antonio Guterres said, “AI will have an impact on every area of our lives” and advocated for the creation of a “new United Nations entity to support collective efforts to govern this extraordinary technology.”

Guterres said “the need for global standards and approaches makes the United Nations the ideal place for this to happen” and urged a joining of forces to “build trust for peace and security.”

“We need a race to develop AI for good,” Guterres said. “And that is a race that is possible and achievable.”

In his briefing to the council, Guterres said the debate was an opportunity to consider the impact of artificial intelligence on peace and security “where it is already raising political, legal, ethical and humanitarian concerns.”

He also stated that while governments, large companies and organizations around the world are working on an AI strategy, “even its own designers have no idea where their stunning technological breakthrough may lead.”

Guterres urged the Security Council “to approach this technology with a sense of urgency, a global lens and a learner’s mindset, because what we have seen is just the beginning.”

AI for good and evil

The secretary-general’s remarks set the stage for a series of comments and observations by session participants on how artificial intelligence can benefit society in health, education and human rights, while recognizing that, left unchecked, AI also has the potential to be used for nefarious purposes.

To that point, there was widespread acknowledgment that AI in every iteration of its development needs to be kept in check with specific guidelines, rules and regulations to protect privacy and ensure security without hindering innovation.

“We cannot leave the development of artificial intelligence solely to private sector actors,” said Jack Clark, co-founder of Anthropic, a leading AI company. “The governments of the world must come together, develop state capacity, and make the development of powerful AI systems a shared endeavor across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace.”

AI as human labor

Yi Zeng, a professor at the Institute of Automation, Chinese Academy of Sciences, shared a similar sentiment.

“AI should never pretend to be human,” he said. “We should use generative AI to assist but never trust them to replace human decision-making.”

The U.K. holds the council’s rotating presidency this month and British Foreign Secretary James Cleverly, who chaired the session, called for international cooperation to manage the global implications of artificial intelligence. He said that “global cooperation will be vital to ensure AI technologies and the rules governing their use are developed responsibly in a way that benefits society.”

Cleverly noted how far the world has come “since the early development of artificial intelligence by pioneers like Alan Turing and Christopher Strachey.”

“This technology has advanced with ever greater speed, yet the biggest AI-induced transformations are still to come,” he said.

Making AI inclusive

“AI development is now outpacing at breakneck speed, and governments are unable to keep up,” said Omran Sharaf, assistant minister of foreign affairs and international cooperation for advanced science and technology, in the United Arab Emirates.

“It is time to be optimistic realists when it comes to AI” and to “harness the opportunities it offers,” he said.

Among the proposals he suggested was addressing real-world biases that AI could double down on.

“Decades of progress on the fight against discrimination, especially gender discrimination towards women and girls, as well as against persons with disabilities, will be undermined if we do not ensure an AI that is inclusive,” Sharaf said.

AI as double-edged sword

Zhang Jun, China’s permanent representative to the U.N., lauded the empowering role of AI in scientific research, health care and autonomous driving.

But he also acknowledged how it is raising concerns in areas such as data privacy, spreading false information, exacerbating social inequality, and its potential misuse or abuse by terrorists or extremist forces, “which will pose a significant threat to international peace and security.”

“Whether AI is used for good or evil depends on how mankind utilizes it, regulates it and how we balance scientific development with security,” he said.

U.S. envoy Jeffrey DeLaurentis said artificial intelligence offers great promise in addressing global challenges such as food security, education and medicine. He added, however, that AI also has the potential “to compound threats and intensify conflicts, including by spreading mis- and disinformation, amplifying bias and inequality, enhancing malicious cyber operations, and exacerbating human rights abuses.”

“We, therefore, welcome this discussion to understand how the council can find the right balance between maximizing AI’s benefits while mitigating its risks,” he said.

Britain’s Cleverly noted that since no country will be untouched by AI, “we must involve and engage the widest coalition of international actors from all sectors.” 

VOA’s Margaret Besheer contributed to this story.