Category Archives: Business

economy and business news

News Presenter Generated with AI Appears in Kuwait

A Kuwaiti media outlet has unveiled a virtual news presenter generated using artificial intelligence, with plans for it to read online bulletins.   

“Fedha” appeared on the Twitter account of the Kuwait News website Saturday as an image of a woman, her light-colored hair uncovered, wearing a black jacket and white T-shirt.   

“I’m Fedha, the first presenter in Kuwait who works with artificial intelligence at Kuwait News. What kind of news do you prefer? Let’s hear your opinions,” she said in classical Arabic.   

The site is affiliated with the Kuwait Times, founded in 1961 as the Gulf region’s first English-language daily.   

Abdullah Boftain, deputy editor-in-chief for both outlets, said the move is a test of AI’s potential to offer “new and innovative content.”   

In the future Fedha could adopt the Kuwaiti accent and present news bulletins on the site’s Twitter account, which has 1.2 million followers, he said.   

“Fedha is a popular, old Kuwaiti name that refers to silver, the metal. We always imagine robots to be silver and metallic in color, so we combined the two,” Boftain said.    

The presenter’s blonde hair and light-colored eyes reflect the oil-rich country’s diverse population of Kuwaitis and expatriates, according to Boftain.    

“Fedha represents everyone,” he said.   

Her initial 13-second video generated a flood of reactions on social media, including from journalists. 

The rapid rise of AI globally has raised the promise of benefits, such as in health care and the elimination of mundane tasks, but it has also stoked fears, for example over its potential to spread disinformation and the threat it poses to certain jobs and to artistic integrity.   

Kuwait ranked 158 out of 180 countries and territories in the Reporters Without Borders (RSF) 2022 Press Freedom Index. 

Reports: Tesla Plans Shanghai Factory for Power Storage

Electric car maker Tesla Inc. plans to build a factory in Shanghai to produce power-storage devices for sale worldwide, state media reported Sunday.

Plans call for annual production of 10,000 Megapack units, according to the Xinhua News Agency and state television. They said the company made the announcement at a signing ceremony in Shanghai, where Tesla operates an auto factory.

The factory is due to break ground in the third quarter of this year and start production in the second quarter of 2024, the reports said.

Tesla didn’t immediately respond to requests for information.

Mayor in Australia Ready to Sue over Alleged AI Chatbot Defamation

A mayor in Australia’s Victoria state said Friday he may sue after the artificial intelligence writing tool ChatGPT falsely claimed he’d served time in prison for bribery. Hepburn Shire Council Mayor Brian Hood was incorrectly identified as the guilty party in a corruption case in the early 2000s.

Brian Hood was the whistleblower in a corruption scandal involving a company partly owned by the Reserve Bank of Australia. Several people were charged, but Hood was not one of them. That did not stop ChatGPT, an automated writing service powered by artificial intelligence, from generating an article that cast him as the culprit who was jailed for his part in a conspiracy to bribe foreign officials to win currency printing contracts.

Hood only found out about the article after friends told him, he told the Australian Broadcasting Corp. He then used the chatbot software to see the story for himself.

“After making the inquiry, it generated five or six paragraphs of information. The really disturbing thing was that some of the paragraphs were accurate, and then there were other paragraphs that described things that were completely incorrect. It told me that I’d been charged with very serious criminal offenses, that I’d been convicted of them and that I had spent 30 months in jail,” he said.

Hood said that if OpenAI, a U.S.-based company that owns the chatbot, does not correct the false claims, he will sue.

It would be the first defamation lawsuit against the automated service.

However, a new version of ChatGPT reportedly avoids the mistakes of its predecessor. It reportedly correctly explains that Hood was a whistleblower who was praised for his actions. Hood’s lawyers say that the defamatory material, which damages the mayor’s reputation, still exists and their efforts to have the mistakes rectified would continue.

A disclaimer on the ChatGPT program warns users that it “may produce inaccurate information about people, places, or facts”.  The technology has exploded in popularity around the world.

OpenAI has yet to comment publicly on the allegations.

Google has announced the launch of its rival to ChatGPT, Bard. Meta, which owns WhatsApp, Facebook and Instagram, launched its own AI chatbot, BlenderBot, in the United States last year, while Baidu, the Chinese tech company, has said it was working on an advanced version of its chatbot, Ernie.

Samsung Cutting Memory Chip Production as Profit Slides

Samsung Electronics said Friday it is cutting the production of its computer memory chips in an apparent effort to reduce inventory as it forecasted another quarter of sluggish profit. 

The South Korean technology giant, in a regulatory filing, said it has been reducing the production of certain memory products by unspecified “meaningful levels” to optimize its manufacturing operations, adding it has sufficient supplies of those chips to meet demand fluctuations. 

The company predicted an operating profit of $455 million for the three months through March, which would be a 96% decline from the same period a year earlier. It said sales during the quarter likely fell 19% to $47.7 billion. 

Samsung, which will release its finalized first quarter earnings later this month, said the demand for its memory chips declined as a weak global economy depressed consumer spending on technology products and forced business clients to adjust their inventories to nurse worsening finances. 

Samsung had reported a near 70% drop in profit for the October-December quarter, which partially reflected how global events like Russia’s war on Ukraine and high inflation have rattled technology markets. 

SK Hynix, another major South Korean semiconductor producer, said this week that it sold $1.7 billion in bonds that can be exchanged for the company’s shares to help fund its purchases of chipmaking materials as it weathers the industry’s downswing. SK Hynix had reported an operating loss of $1.28 billion for the October-December period, which marked its first quarterly deficit since 2012. 

“While we have lowered our short-term production plans, we expect solid demand for the mid- to long-term, so we will continue to invest in infrastructure to secure essential levels in clean room capacities and expand investment in research and development to strengthen our technology leadership,” Samsung said. 

Samsung last month announced plans to invest $227 billion over the next 20 years as part of an ambitious South Korean project to build the world’s largest semiconductor manufacturing base near the capital, Seoul. 

The chip-making “mega cluster,” which will be established in Gyeonggi province by 2042, will be anchored by five new semiconductor plants built by Samsung near its existing manufacturing hub. It will aim to attract 150 other companies producing materials and components or designing high-tech chips, according to South Korea’s government. 

The South Korean plan comes as other technology powerhouses, including the United States, Japan and China, are building up their domestic chip manufacturing, deploying protectionist measures, tax cuts and sizeable subsidies to lure investments. 

FBI Targets Users in Crackdown on Darknet Marketplaces

Darknet users, beware: If you frequent criminal marketplaces in the internet’s underbelly, chances are you’re in the FBI’s crosshairs.

The FBI is cracking down on sites that peddle everything from guns to stolen personal data, and it is not only going after the sites’ administrators but also their users.

A recent surge in ransomware attacks and other malicious cyber activities has fueled the effort to shut down services that cater to online criminals.

But the strategy hasn’t always been effective. With each takedown, a new iteration pops up, drawing users with it. That is why the FBI is eyeing both the operators and users of these sites.

“We’re not only trying to attack the supply side, but we’re also attacking the demand side with the users,” a senior FBI official said during a Wednesday briefing on the agency’s takedown of Genesis Market, a large online criminal marketplace. “There’s consequences if you’re going to be using these types of sites to engage in this type of activity.”

The darknet, the hidden part of the internet that can only be accessed by a special browser, has long been home to various criminal marketplaces and forums.

One type of criminal marketplace there specializes in buying and selling illegal items, such as drugs, firearms and fraudulently obtained gift cards.

Another type of market trades in sensitive data, such as stolen credit cards, bank account details and other information that can be used for criminal activity. These sites are known as “data stores.”

In recent years, a new breed of cyber criminals has emerged. Known as “initial access brokers,” these criminals specialize in selling access to compromised computer networks. Among their customers: ransomware gangs.

The takedown on Tuesday of Genesis Market, a 5-year-old criminal marketplace described by officials as an “initial access broker,” offers a window into this type of cyber-criminal activity.

It also shows how the FBI is increasingly going after users of criminal marketplaces and not just their administrators.

U.S. officials said Genesis Market was not only a seller of stolen account access credentials but was also “one of the most prolific” initial access brokers operating on the darknet.

Describing it as a “key enabler of ransomware,” the Justice Department said Genesis Market sold “the type of access sought by ransomware actors to attack computer networks in the United States and around the world.”

The site went dark on Tuesday after the FBI, working with law enforcement agencies in nearly 20 countries, including the U.K. and Canada, took it offline and arrested nearly 120 people.

In a statement, Attorney General Merrick Garland hailed the operation as “an unprecedented takedown of a major criminal marketplace that enabled cybercriminals to victimize individuals, businesses, and governments around the world.”

Genesis is one of two popular cyber-criminal marketplaces taken down by the FBI in the past month.

In March, the FBI shut down Breach Forums, a criminal forum and marketplace that boasted more than 340,000 members. On the Breach Forums website, users discussed tools and techniques for hacking and exploiting hacked information, according to the Justice Department.

“We’re going after the users who leverage a service like Genesis Market, and we are doing that on a global scale,” the FBI official said.

To take down Genesis Market, the FBI and its international law enforcement partners seized its servers and domains.

In doing so, the FBI was able to obtain information about 59,000 individual user accounts, a senior Justice Department official said during the briefing.

The information included usernames, passwords, email accounts, secure messenger accounts and user histories, the official said.

“And those records helped law enforcement uncover the true identities of many of the users,” the official said.

The users ran the gamut from online fraudsters to ransomware criminals.

Some of the users were in the U.S., officials said, declining to provide any other details about them. They were among the 119 people arrested around the world in connection with the Genesis Market takedown.

Artemis Crew Looking Forward to Restarting NASA’s Moon Program

The last time humans were on the moon was in 1972. Now NASA is preparing to return astronauts to the lunar surface in 2025, if all goes as scheduled. VOA’s Alexander Kruglyakov spoke with the crew that will take part in the first of those missions: a planned flight around the moon in November 2024.

US Chip Controls Threaten China’s Technology Ambitions

Furious at U.S. efforts that cut off access to technology to make advanced computer chips, China’s leaders appear to be struggling to figure out how to retaliate without hurting their own ambitions in telecoms, artificial intelligence and other industries.

Chinese leader Xi Jinping’s government sees the chips — which are used in everything from phones to kitchen appliances to fighter jets — as crucial assets in its strategic rivalry with Washington and efforts to gain wealth and global influence. Chips are the center of a “technology war,” a Chinese scientist wrote in an official journal in February.

China has its own chip foundries, but they supply only low-end processors used in autos and appliances. The U.S. government, starting under President Donald Trump, has been cutting off access to a growing array of tools to make chips for computer servers, AI and other advanced applications. Japan and the Netherlands have joined in limiting access to technology they say might be used to make weapons.

Xi, in unusually pointed language, accused Washington in March of trying to block China’s development with a campaign of “containment and suppression.” He called on the public to “dare to fight.”

Despite that, Beijing has been slow to retaliate against U.S. companies, possibly to avoid disrupting Chinese industries that assemble most of the world’s smartphones, tablet computers and other consumer electronics. They import more than $300 billion worth of foreign chips every year.

Investing in self-reliance

The ruling Communist Party is throwing billions of dollars at trying to accelerate chip development and reduce the need for foreign technology.

China’s loudest complaint: It is blocked from buying a machine available only from a Dutch company, ASML, that uses ultraviolet light to etch circuits into silicon chips on a scale measured in nanometers, or billionths of a meter. Without that, Chinese efforts to make transistors faster and more efficient by packing them more closely together on fingernail-size slivers of silicon are stalled.

Making processor chips requires some 1,500 steps and technologies owned by U.S., European, Japanese and other suppliers.

“China won’t swallow everything. If damage occurs, we must take action to protect ourselves,” the Chinese ambassador to the Netherlands, Tan Jian, told the Dutch newspaper Financieele Dagblad.

“I’m not going to speculate on what that might be,” Tan said. “It won’t just be harsh words.”

The conflict has prompted warnings the world might split into separate spheres with incompatible technology standards that mean computers, smartphones and other products from one region wouldn’t work in others. That would raise costs and might slow innovation.

“The bifurcation in technological and economic systems is deepening,” Prime Minister Lee Hsien Loong of Singapore said at an economic forum in China last month. “This will impose a huge economic cost.”

U.S.-Chinese relations are at their lowest level in decades due to disputes over security, Beijing’s treatment of Hong Kong and of Muslim ethnic minorities, territorial disputes, and China’s multibillion-dollar trade surpluses.

Chinese industries will “hit a wall” in 2025 or 2026 if they can’t get next-generation chips or the tools to make their own, said Handel Jones, a tech industry consultant.

China “will start falling behind significantly,” said Jones, CEO of International Business Strategies.

EV batteries as leverage

Beijing might have leverage, though, as the biggest source of batteries for electric vehicles, Jones said.

Chinese battery giant CATL supplies U.S. and European automakers. Ford Motor Co. plans to use CATL technology in a $3.5 billion battery factory in Michigan.

“China will strike back,” Jones said. “What the public might see is China not giving the U.S. batteries for EVs.”

On Friday, Japan increased pressure on Beijing by joining Washington in imposing controls on exports of chipmaking equipment. The announcement didn’t mention China, but the trade minister said Tokyo doesn’t want its technology used for military purposes.

A Chinese Foreign Ministry spokeswoman, Mao Ning, warned Japan that “weaponizing sci-tech and trade issues” would “hurt others as well as oneself.”

Hours later, the Chinese government announced an investigation of the biggest U.S. memory chip maker, Micron Technology Inc., a key supplier to Chinese factories. The Cyberspace Administration of China said it would look for national security threats in Micron’s technology and manufacturing but gave no details.

The Chinese military also needs semiconductors for its development of stealth fighter jets, cruise missiles and other weapons.

Chinese alarm grew after President Joe Biden in October expanded controls imposed by Trump on chip manufacturing technology. Biden also barred Americans from helping Chinese manufacturers with some processes.

To nurture Chinese suppliers, Xi’s government is stepping up support that industry experts say already amounts to as much as $30 billion a year in research grants and other subsidies.

Biden Eyes AI Dangers, Says Tech Companies Must Make Sure Products are Safe

U.S. President Joe Biden said on Tuesday it remains to be seen whether artificial intelligence (AI) is dangerous, but underscored that technology companies had a responsibility to ensure their products were safe before making them public. 

Biden told science and technology advisers that AI could help in addressing disease and climate change, but it was also important to address potential risks to society, national security and the economy. 

“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” he said at the start of a meeting of the President’s Council of Advisors on Science and Technology. When asked if AI was dangerous, he said, “It remains to be seen. It could be.” 

Biden spoke on the same day that his predecessor, former President Donald Trump, surrendered in New York over charges stemming from a probe into hush money paid to a porn actor. 

Biden declined to comment on Trump’s legal woes, and Democratic strategists say his focus on governing will create a politically advantageous split screen of sorts as his former rival, a Republican, deals with his legal challenges. 

The president said social media had already illustrated the harm that powerful technologies can do without the right safeguards. 

“Absent safeguards, we see the impact on the mental health and self-images and feelings and hopelessness, especially among young people,” Biden said.  

He reiterated a call for Congress to pass bipartisan privacy legislation to put limits on personal data that technology companies collect, ban advertising targeted at children, and to prioritize health and safety in product development. 

Shares of companies that employ AI dropped sharply before Biden’s meeting, although the broader market was also selling off on Tuesday.  

Shares of AI software company C3.ai Inc. were down 24%, more than halving a four-session winning streak of nearly 40% through Monday. Thailand-based security firm Guardforce AI (GFAI.O) fell 29%, data analytics firm BigBear.ai (BBAI.N) was down 16% and conversation intelligence company SoundHound AI (SOUN.O) was down 13% late on Tuesday.  

AI is becoming a hot topic for policymakers. 

The tech ethics group Center for AI and Digital Policy has asked the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, which has wowed and appalled users with its human-like abilities to generate written responses to requests. 

Democratic U.S. Senator Chris Murphy has urged society to pause as it considers the ramifications of AI. 

Last year the Biden administration released a blueprint “Bill of Rights” to help ensure users’ rights are protected as technology companies design and develop AI systems.  

US-Trained Woman Teaching Digital Skills to Children in Rural Kenya

The digital divide is one of the biggest challenges to education in sub-Saharan Africa, where the United Nations says nearly 90% of students lack access to household computers, and 82% to the internet. In Kenya, the aid group TechLit Africa aims to change that by building scores of computer labs. Juma Majanga reports from Mogotio, Kenya.

Ukraine’s Destruction Brought to Life Through Virtual Reality Exhibit

An exhibition currently on display in Poland uses virtual reality to show the level of destruction Russia’s war has brought to Ukraine. For some visitors, the VR videos that can be viewed at the “Through the War” display have been overwhelming. Lesia Bakalets reports from Warsaw. Camera: Daniil Batushchak.

Australia Bans TikTok on Government Devices

Australia said Tuesday it will ban TikTok on government devices, joining a growing list of Western nations cracking down on the Chinese-owned app due to national security fears.   

Attorney-General Mark Dreyfus said the decision followed advice from the country’s intelligence agencies and would begin “as soon as practicable”.   

Australia is the last member of the secretive Five Eyes security alliance to pursue a government TikTok ban, joining its allies the United States, Britain, Canada and New Zealand.   

France, the Netherlands and the European Commission have made similar moves.   

Dreyfus said the government would approve some exemptions on a “case-by-case basis” with “appropriate security mitigations in place”.   

Cybersecurity experts have warned that the app — which boasts more than one billion global users — could be used to hoover up data that is then shared with the Chinese government.   

Surveys have estimated that as many as seven million Australians use the app — or about a quarter of the population.   

In a security notice outlining the ban, the Attorney-General’s Department said TikTok posed “significant security and privacy risks” stemming from the “extensive collection of user data”.   

China condemned the ban, saying it had “lodged stern representations” with Canberra over the move and urging Australia to “provide Chinese companies with a fair, transparent and non-discriminatory business environment”.   

“China has always maintained that the issue of data security should not be used as a tool to generalize the concept of national security, abuse state power and unreasonably suppress companies from other countries,” foreign ministry spokesperson Mao Ning said.   

‘No-brainer’    

But Fergus Ryan, an analyst with the Australian Strategic Policy Institute, said stripping TikTok from government devices was a “no-brainer”.   

“It’s been clear for years that TikTok user data is accessible in China,” Ryan told AFP.    

“Banning the use of the app on government phones is a prudent decision given this fact.”   

The security concerns are underpinned by a 2017 Chinese law that requires local firms to hand over personal data to the state if it is relevant to national security.   

Beijing has denied these reforms pose a threat to ordinary users.   

China “has never and will not require companies or individuals to collect or provide data located in a foreign country, in a way that violates local law”, the foreign ministry’s Mao said in March.   

‘Rooted in xenophobia’   

TikTok has said such bans are “rooted in xenophobia”, while insisting that it is not owned or operated by the Chinese government.    

The company’s Australian spokesman Lee Hunter said it would “never” give data to the Chinese government.   

“No one is working harder to make sure this would never be a possibility,” he told Australia’s Channel Seven.   

But the firm acknowledged in November that some employees in China could access European user data, and in December it said employees had used the data to spy on journalists.   

The app is typically used to share short, lighthearted videos and has exploded in popularity in recent years.   

Many government departments were initially eager to use TikTok as a way to connect with a younger demographic that is harder to reach through traditional media channels.   

New Zealand banned TikTok from government devices in March, saying the risks were “not acceptable in the current New Zealand Parliamentary environment”.    

Earlier this year, the Australian government announced it would be stripping Chinese-made CCTV cameras from politicians’ offices due to security concerns. 

Virgin Orbit Files for Bankruptcy, Seeks Buyer

Virgin Orbit, the satellite launch company founded by Richard Branson, has filed for Chapter 11 bankruptcy and will sell the business, the firm said in a statement Tuesday.   

The California-based company said last week it was laying off 85% of its employees — around 675 people — to reduce expenses due to its inability to secure sufficient funding.   

Virgin Orbit suffered a major setback earlier this year when an attempt to launch the first rocket into space from British soil ended in failure.   

The company had organized the mission with the UK Space Agency and Cornwall Spaceport to launch nine satellites into space.   

On Tuesday, the firm said “it commenced a voluntary proceeding under Chapter 11 of the U.S. Bankruptcy Code… in order to effectuate a sale of the business” and intended to use the process “to maximize value for its business and assets.”   

Last month, Virgin Orbit suspended operations for several days while it held funding negotiations and explored strategic opportunities.   

But at an all-hands meeting on Thursday, CEO Dan Hart told employees that operations would cease “for the foreseeable future,” U.S. media reported at the time.   

“While we have taken great efforts to address our financial position and secure additional financing, we ultimately must do what is best for the business,” Hart said in the company statement on Tuesday.   

“We believe that the cutting-edge launch technology that this team has created will have wide appeal to buyers as we continue in the process to sell the Company.”   

Founded by Branson in 2017, the firm developed “a new and innovative method of launching satellites into orbit,” while “successfully launching 33 satellites into their precise orbit,” Hart added.   

Virgin Orbit’s shares on the New York Stock Exchange were down 3% at 19 cents on Monday evening. 

Germany Could Block ChatGPT if Needed, Says Data Protection Chief

Germany could follow in Italy’s footsteps by blocking ChatGPT over data security concerns, the German commissioner for data protection told the Handelsblatt newspaper in comments published on Monday.

Microsoft-backed OpenAI took ChatGPT offline in Italy on Friday after the national data agency banned the chatbot temporarily and launched an investigation into a suspected breach of privacy rules by the artificial intelligence application. 

“In principle, such action is also possible in Germany,” Ulrich Kelber said, adding that this would fall under state jurisdiction. He did not, however, outline any such plans. 

Kelber said that Germany has requested further information from Italy on its ban. Privacy watchdogs in France and Ireland said they had also contacted the Italian data regulator to discuss its findings. 

“We are following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU data protection authorities in relation to this matter,” said a spokesperson for Ireland’s Data Protection Commissioner (DPC). 

OpenAI had said on Friday that it actively works to reduce personal data in training its AI systems. 

While the Irish DPC is the lead EU regulator for many global technology giants under the bloc’s “one stop shop” data regime, it is not the lead regulator for OpenAI, which has no offices in the EU.

The privacy regulator in Sweden said it has no plans to ban ChatGPT nor is it in contact with the Italian watchdog.

The Italian investigation into OpenAI was launched after a cybersecurity breach last week led to people being shown excerpts of other users’ ChatGPT conversations and their financial information. 

It accused OpenAI of failing to check the age of ChatGPT’s users, who are supposed to be aged 13 or above. Italy is the first Western country to take action against a chatbot powered by artificial intelligence. 

For a nine-hour period, the exposed data included first and last names, billing addresses, credit card types, credit card expiration dates and the last four digits of credit card numbers, according to an email sent by OpenAI to one affected customer and seen by the Financial Times.

NASA to Reveal Crew for 2024 Flight Around the Moon

NASA is to reveal the names on Monday of the astronauts — three Americans and a Canadian — who will fly around the Moon next year, a prelude to returning humans to the lunar surface for the first time in a half century.   

The mission, Artemis II, is scheduled to take place in November 2024 with the four-person crew circling the Moon but not landing on it.   

As part of the Artemis program, NASA aims to send astronauts to the Moon in 2025 — more than five decades after the historic Apollo missions ended in 1972.   

Besides putting the first woman and first person of color on the Moon, the US space agency hopes to establish a lasting human presence on the lunar surface and eventually launch a voyage to Mars.   

NASA administrator Bill Nelson said this week at a “What’s Next Summit” hosted by Axios that he expected a crewed mission to Mars by the year 2040.  

The four members of the Artemis II crew will be announced at an event at 10:00 am (1500 GMT) at the Johnson Space Center in Houston.   

The 10-day Artemis II mission will test NASA’s powerful Space Launch System rocket as well as the life-support systems aboard the Orion spacecraft.   

The first Artemis mission wrapped up in December with an uncrewed Orion capsule returning safely to Earth after a 25-day journey around the Moon.   

During the trip around Earth’s natural satellite and back, Orion logged well over 1.6 million kilometers and went farther from Earth than any previous habitable spacecraft.   

Nelson was also asked at the Axios summit whether NASA could stick to its timetable of landing astronauts on the south pole of the Moon in late 2025.   

“Space is hard,” Nelson said. “You have to wait until you know that it’s as safe as possible, because you’re living right on the edge.   

“So I’m not so concerned with the time,” he said. “We’re not going to launch until it’s right.”   

Only 12 people — all of them white men — have set foot on the Moon. 

Twitter Pulls ‘Verified’ Check Mark From Main New York Times Account

Twitter has removed the verification check mark on the main account of The New York Times, one of CEO Elon Musk’s most despised news organizations.

The removal comes as many of Twitter’s high-profile users are bracing for the loss of the blue check marks that helped verify their identity and distinguish them from impostors on the social media platform.

Musk, who owns Twitter, set a deadline of Saturday for verified users to buy a premium Twitter subscription or lose the checks on their profiles. The Times said in a story Thursday that it would not pay Twitter for verification of its institutional accounts.

Early Sunday, Musk tweeted that the Times’ check mark would be removed. Later he posted disparaging remarks about the newspaper, which has aggressively reported on Twitter and on flaws with partially automated driving systems at Tesla, the electric car company, which he also runs.

Other Times accounts such as its business news and opinion pages still had either blue or gold check marks Sunday, as did multiple reporters for the news organization.

“We aren’t planning to pay the monthly fee for check mark status for our institutional Twitter accounts,” the Times said in a statement Sunday. “We also will not reimburse reporters for Twitter Blue for personal accounts, except in rare instances where this status would be essential for reporting purposes.”

The Associated Press, which has said it also will not pay for the check marks, still had them on its accounts at midday Sunday.

Twitter did not answer emailed questions Sunday about the removal of The New York Times check mark.

The cost of keeping the check marks ranges from $8 a month for individual web users to a starting price of $1,000 monthly to verify an organization, plus $50 monthly for each affiliate or employee account. Twitter does not verify that individual accounts are who they say they are, as was the case with the previous blue checks doled out to public figures and others under the platform’s pre-Musk administration.

While the cost of Twitter Blue subscriptions might seem like nothing for Twitter’s most famous commentators, celebrity users from basketball star LeBron James to Star Trek’s William Shatner have balked at joining. Seinfeld actor Jason Alexander pledged to leave the platform if Musk takes his blue check away.

The White House is also passing on enrolling in premium accounts, according to a memo sent to staff. While Twitter has granted a free gray mark for President Joe Biden and members of his Cabinet, lower-level staff won’t get Twitter Blue benefits unless they pay for it themselves.

“If you see impersonations that you believe violate Twitter’s stated impersonation policies, alert Twitter using Twitter’s public impersonation portal,” said the staff memo from White House official Rob Flaherty.

Alexander, the actor, said there are bigger issues in the world, but without the blue mark “anyone can allege to be me,” so if he loses it, he’s gone.

“Anyone appearing with it=an imposter. I tell you this while I’m still official,” he tweeted.

After buying Twitter for $44 billion in October, Musk has been trying to boost the struggling platform’s revenue by pushing more people to pay for a premium subscription. But his move also reflects his assertion that the blue verification marks have become an undeserved or “corrupt” status symbol for elite personalities, news reporters and others granted verification for free by Twitter’s previous leadership.

Along with shielding celebrities from impersonators, one of Twitter’s main reasons for introducing the blue check mark about 14 years ago was to verify politicians, activists, people who suddenly find themselves in the news, and little-known journalists at small publications around the globe, as an extra tool to curb misinformation from accounts impersonating them. Most “legacy blue checks” are not household names and weren’t meant to be.

One of Musk’s first product moves after taking over Twitter was to launch a service granting blue checks to anyone willing to pay $8 a month. But it was quickly inundated by impostor accounts, including those impersonating Nintendo, pharmaceutical company Eli Lilly and Musk’s businesses Tesla and SpaceX, so Twitter had to temporarily suspend the service days after its launch.

The relaunched service costs $8 a month for web users and $11 a month for users of its iPhone or Android apps. Subscribers are supposed to see fewer ads, be able to post longer videos and have their tweets featured more prominently. 

Dutch Refinery to Feed Airlines’ Thirst for Clean Fuel 

Scaffolding and green pipes envelop a refinery in the port of Rotterdam where Finnish giant Neste is preparing to significantly boost production of sustainable aviation fuel. 

Switching to non-fossil aviation fuels that produce less net greenhouse gas emissions is key to plans to decarbonize air transport, a significant contributor to global warming. 

Neste, the world’s largest producer of sustainable aviation fuel, uses cooking oil and animal fat at this Dutch refinery. 

Sustainable aviation fuels (SAF) are being made from different sources such as municipal waste, leftovers from the agricultural and forestry industry, crops and plants, and even hydrogen. 

These technologies are still developing, and the product is more expensive. 

But these fuels will help airlines reduce CO2 emissions by up to 80%, according to the International Air Transport Association. 

Global output of SAF was 250,000 tons last year, less than 0.1% of the more than 300 million tons of aviation fuel used during that period. 

“It’s a drop in the ocean but a significant drop,” said Matti Lehmus, CEO of Neste. 

“We’ll be growing drastically our production from 100,000 tons to 1.5 million tons next year,” he added. 

There clearly is demand. 

The European Union plans to impose the use of a minimum amount of sustainable aviation fuel by airlines, rising from 2% in 2025 to 6% in 2030 and at least 63% in 2050. 

Neste has another site for SAF in Singapore, which will start production in April. 

“With the production facilities of Neste in Rotterdam and Singapore, we can meet the mandate for [the] EU in 2025,” said Jonathan Wood, the company’s vice president for renewable aviation. 

Vincent Etchebehere, director for sustainable development at Air France, said that “between now and 2030, there will be more demand than supply of SAF.” 

Need to mature technologies 

Air France-KLM has reached a deal with Neste for a supply of 1 million tons of sustainable aviation fuel between 2023 and 2030. 

It has also lined up 10-year agreements with U.S. firm DG Fuels for 600,000 tons and with TotalEnergies for 800,000 tons. 

At the Rotterdam site, two giant storage tanks of 15,000 cubic meters are yet to be painted. 

They stand near a quay from which the fuel will be shipped by boat to Amsterdam’s Schiphol airport and to airports in Paris. 

The Franco-Dutch group has already taken steps to cut its carbon footprint, using 15% of the global SAF output last year — or 0.6% of its fuel needs. 

Neste’s Lehmus said there was a great need to “mature the technologies” to make sustainable aviation fuel from diverse sources such as algae, lignocellulose and synthetic fuels. 

Air France CEO Anne Rigail said the price of sustainable aviation fuel was as important an issue as its production. 

Sustainable fuel costs 3,500 euros ($3,800) a ton globally but only $2,000 in the United States thanks to government subsidies. In France, it costs 5,000 euros a ton. 

“We need backing and we really think the EU can do more,” said Rigail. 

Italy Temporarily Blocks ChatGPT Over Privacy Concerns

Italy is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach as it investigates a possible violation of stringent European Union data protection rules, the government’s privacy watchdog said Friday.

The Italian Data Protection Authority said it was taking provisional action “until ChatGPT respects privacy,” including temporarily limiting the company from processing Italian users’ data.

U.S.-based OpenAI, which developed the chatbot, said late Friday night that it had disabled ChatGPT for Italian users at the government’s request. The company said it believes its practices comply with European privacy laws and hopes to make ChatGPT available again soon.

While some public schools and universities around the world have blocked ChatGPT from their local networks over student plagiarism concerns, Italy’s action is “the first nation-scale restriction of a mainstream AI platform by a democracy,” said Alp Toker, director of the advocacy group NetBlocks, which monitors internet access worldwide.

The restriction affects the web version of ChatGPT, popularly used as a writing assistant, but is unlikely to affect software applications from companies that already have licenses with OpenAI to use the same technology driving the chatbot, such as Microsoft’s Bing search engine.

The AI systems that power such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.

The Italian watchdog said OpenAI must report within 20 days what measures it has taken to ensure the privacy of users’ data or face a fine of up to either 20 million euros (nearly $22 million) or 4% of annual global revenue.

The agency’s statement cited the EU’s General Data Protection Regulation and pointed to a recent data breach involving ChatGPT “users’ conversations” and information about subscriber payments.

OpenAI earlier announced that it had to take ChatGPT offline on March 20 to fix a bug that allowed some people to see the titles, or subject lines, of other users’ chat history.

“Our investigation has also found that 1.2% of ChatGPT Plus users might have had personal data revealed to another user,” the company had said. “We believe the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted.”

Italy’s privacy watchdog, known as the Garante, also questioned whether OpenAI had legal justification for its “massive collection and processing of personal data” used to train the platform’s algorithms. And it said ChatGPT can sometimes generate — and store — false information about individuals.

Finally, it noted there’s no system to verify users’ ages, exposing children to responses “absolutely inappropriate to their age and awareness.”

OpenAI said in response that it works “to reduce personal data in training our AI systems like ChatGPT because we want our AI to learn about the world, not about private individuals.”

“We also believe that AI regulation is necessary — so we look forward to working closely with the Garante and educating them on how our systems are built and used,” the company said.

The Italian watchdog’s move comes as concerns grow about the artificial intelligence boom. A group of scientists and tech industry leaders published a letter Wednesday calling for companies such as OpenAI to pause the development of more powerful AI models until the fall to give time for society to weigh the risks.

The president of Italy’s privacy watchdog agency told Italian state TV Friday evening he was one of those who signed the appeal. Pasquale Stanzione said he did so because “it’s not clear what aims are being pursued” ultimately by those developing AI.

Namibia Looks East for Green Hydrogen Partnerships

Zhang Jianhua, administrator of China’s National Energy Administration, paid a visit Friday to Namibian President Hage Geingob. The visit was aimed at establishing cooperation in green hydrogen production.

Namibia is positioning itself as a future green hydrogen producer to attract investment from the globe’s leading and fastest-growing producer of renewable energy — China.

James Mnyupe, Namibia’s green hydrogen commissioner and economic adviser to the president, told VOA that although Namibia has not signed a partnership with China on green hydrogen, officials are looking to the Asian country as a critical partner. But it isn’t talking to China alone.

“We have an MOU [memorandum of understanding] with Europe; we are also discussing possibilities of collaboration with the United States,” he said. “If you look at any of these green hydrogen projects as I mentioned, simply they will use components from all over the world.”

He said in the face of rising energy demands around the globe and increased tensions between the East and West, Namibia will not be drawn into picking sides. He was referring to the conflict in Ukraine and its effect on international relations.

“So today Europe’s biggest trading partner is China, China’s biggest markets are the U.S. and Europe so if Namibia trades with Europe, China or the U.S. for that matter, that is not a reason for involving Namibia in any political or conflict-related discussions between those countries,” he said.

Presidential spokesperson Alfredo Hengari said the visit by U.S. Ambassador to Namibia Randy Berry on Tuesday was aimed at cementing relations in major areas of interest, among them green hydrogen and oil exploration.

“Namibia is making tremendous advances in the areas of green energy but also in hydrocarbons,” he said. “American companies are drilling off the coast of the Republic of Namibia and so it was a courtesy visit just to emphasize increasing cooperation in these areas.”

Speaking through an interpreter Friday, the head of China’s National Energy Administration said China is ready to partner with Namibia in all areas of green hydrogen.

Hydrogen is an alternative fuel that industrialized nations hope can help them reach their ambitious goal of net-zero carbon emissions by 2050.

Mnyupe said Namibia is looking to learn from China’s experience in producing renewable energy and renewable energy components. Friday’s visit is an indication of China’s interest in partnering with Namibia and participating in the country’s green-hydrogen value chain.

Call for Pause in AI Development May Fall on Deaf Ears

A group of influential figures from Silicon Valley and the larger tech community released an open letter this week calling for a pause in the development of powerful artificial intelligence programs, arguing that they present unpredictable dangers to society.

The organization that created the open letter, the Future of Life Institute, said the recent rollout of increasingly powerful AI tools by companies like OpenAI, IBM and Google demonstrates that the industry is “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The signatories of the letter, including Elon Musk, founder of Tesla and SpaceX, and Steve Wozniak, co-founder of Apple, called for a six-month halt to all development work on large language model AI projects.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter says. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

The letter does not call for a halt to all AI-related research but focuses on extremely large systems that assimilate vast amounts of data and use it to solve complex tasks and answer difficult questions.

However, experts told VOA that commercial competition between different AI labs, and a broader concern about allowing Western companies to fall behind China in the race to develop more advanced applications of the technology, make any significant pause in development unlikely.

Chatbots offer window

While artificial intelligence is present in day-to-day life in myriad ways, including algorithms that curate social media feeds, systems that make credit decisions at many financial institutions, and facial recognition increasingly used in security systems, large language models have taken center stage in the discussion of AI.

In its simplest form, a large language model is an AI system that analyzes large amounts of textual data and uses a set of parameters to predict the next word in a sentence. However, models of sufficient complexity, operating with billions of parameters, are able to model human language, sometimes with uncanny accuracy.
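The next-word idea can be illustrated with a toy sketch far simpler than any commercial model: instead of billions of learned parameters, it simply counts which word most often follows each word in a tiny sample text. (The sample text and function names below are invented for illustration and are not from any actual model.)

```python
from collections import Counter, defaultdict

# Tiny "training" text, purely illustrative.
text = "the cat sat on the mat and the cat slept"
words = text.split()

# Count, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Return the most frequent follower of `word`, or None if unseen.
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it follows "the" most often here
```

A real large language model replaces these raw counts with billions of parameters conditioned on long stretches of context, which is what allows it to produce fluent text rather than single-word guesses.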

In November of last year, OpenAI released a program called ChatGPT (Chat Generative Pre-trained Transformer) to the general public. Based on the underlying GPT-3.5 model, it lets users enter text through a web browser and returns responses created nearly instantaneously.

ChatGPT was an immediate sensation, as people used it to generate everything from complex computer code to poetry. Though it quickly became apparent that the program frequently returned false or misleading information, its potential to disrupt any number of sectors of life, from academia to customer service systems to national defense, was clear.

Microsoft has since integrated ChatGPT into its search engine, Bing. More recently, Google has rolled out its own AI-supported search capability, known as Bard.

GPT-4 as benchmark

In the letter calling for a pause in development, the signatories use GPT-4 as a benchmark. GPT-4 is an AI tool developed by OpenAI that is more powerful than the version that powers the original ChatGPT. It is currently in limited release. The moratorium being called for in the letter is on systems “more powerful than GPT-4.”

One problem, though, is that it is not precisely clear what “more powerful” means in this context.

“There are other models that, in computational terms, are much less large or powerful, but which have very powerful potential impacts,” Bill Drexel, an associate fellow with the AI Safety and Stability program at the Center for a New American Security (CNAS), told VOA. “So there are much smaller models that can potentially help develop dangerous pathogens or help with chemical engineering — really consequential models that are much smaller.”

Limited capabilities

Edward Geist, a policy researcher at the RAND Corporation and the author of the forthcoming book Deterrence Under Uncertainty: Artificial Intelligence and Nuclear Warfare, told VOA that it is important to understand both what programs like GPT-4 are capable of and what they are not.

For example, he said, OpenAI has made it clear in technical data provided to potential commercial customers that once the model is trained on a set of data, there is no clear way to teach it new facts or otherwise update it without completely retraining the system. Additionally, it does not appear to be able to perform tasks that require “evolving” memory, such as reading a book.

“There are, sort of, glimmerings of an artificial general intelligence,” he said. “But then you read the report, and it seems like it’s missing some features of what I would consider even a basic form of general intelligence.”

Geist said that he believes many of those warning about the dangers of AI are “absolutely earnest” in their concerns, but he is not persuaded that those dangers are as severe as they believe.

“The gap between that super-intelligent self-improving AI that has been postulated in those conjectures, and what GPT-4 and its ilk can actually do seems to be very broad, based on my reading of OpenAI’s technical report about it.”

Commercial and security concerns

James A. Lewis, senior vice president and director of the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS), told VOA he is skeptical that the open letter will have much effect, for reasons as varied as commercial competition and concerns about national security.

Asked what he thinks the chances are of the industry agreeing to a pause in research, he said, “Zero.”

“You’re asking Microsoft to not compete with Google?” Lewis said. “They’ve been trying for decades to beat Google on search engines, and they’re on the verge of being able to do it. And you’re saying, let’s take a pause? Yeah, unlikely.”

Competition with China

More broadly, Lewis said, improvements in AI will be central to progress in technology related to national defense.

“The Chinese aren’t going to stop because Elon Musk is getting nervous,” Lewis said. “That will affect [Department of Defense] thinking. If we’re the only ones who put the brakes on, we lose the race.”

Drexel, of CNAS, agreed that China is unlikely to feel bound by any such moratorium.

“Chinese companies and the Chinese government would be unlikely to agree to this pause,” he said. “If they agreed, they’d be unlikely to follow through. And in any case, it’d be very difficult to verify whether or not they were following through.”

He added, “The reason why they’d be particularly unlikely to agree is because — particularly on models like GPT-4 — they feel and recognize that they are behind. [Chinese President] Xi Jinping has said numerous times that AI is a really important priority for them. And so catching up and surpassing [Western companies] is a high priority.”

Li Ang Zhang, an information scientist with the RAND Corporation, told VOA he believes a blanket moratorium is a mistake.

“Instead of taking a fear-based approach, I’d like to see a better thought-out strategy towards AI governance,” he said in an email exchange. “I don’t see a broad pause in AI research as a tenable strategy but I think this is a good way to open a conversation on what AI safety and ethics should look like.”

He also said that a moratorium might disadvantage the U.S. in future research.

“By many metrics, the U.S. is a world leader in AI,” he said. “For AI safety standards to be established and succeed, two things must be true. The U.S. must maintain its world-lead in both AI and safety protocols. What happens after six months? Research continues, but now the U.S. is six months behind.”

Is Banning TikTok Constitutional?

U.S. lawmakers and officials are ratcheting up threats to ban TikTok, saying the Chinese-owned video-sharing app used by millions of Americans poses a threat to privacy and U.S. national security.

But free speech advocates and legal experts say an outright ban would likely face a constitutional hurdle: the First Amendment right to free speech.

“If passed by Congress and enacted into law, a nationwide ban on TikTok would have serious ramifications for free expression in the digital sphere, infringing on Americans’ First Amendment rights and setting a potent and worrying precedent in a time of increased censorship of internet users around the world,” a coalition of free speech advocacy organizations wrote in a letter to Congress last week, urging a solution short of an outright ban.

The plea came as U.S. lawmakers grilled TikTok CEO Shou Chew over concerns the Chinese government could exploit the platform’s user data for espionage and influence operations in the United States.

TikTok, which bills itself as a “platform for free expression” and a “modern-day version of the town square,” says it has more than 150 million users in the United States.

But the platform is owned by ByteDance, a Beijing-based company, and U.S. officials have raised concerns that the Chinese government could utilize the app’s user data to influence and spy on Americans.

Aaron Terr, director of public advocacy at the Foundation for Individual Rights and Expression, said while there are legitimate privacy and national security concerns about TikTok, the First Amendment implications of a ban so far have received little public attention.

“If nothing else, it’s important for that to be a significant part of the conversation,” Terr said in an interview. “It’s important for people to consider alongside national security concerns.”

To be sure, the First Amendment is not absolute. There are types of speech that are not protected by the amendment. Among them: obscenity, defamation and incitement.

But the Supreme Court has also made it clear there are limits on how far the government can go to regulate speech, even when it involves a foreign adversary or when the government argues that national security is at stake.

In a landmark 1965 case, the Supreme Court invalidated a law that prevented Americans from receiving foreign mail that the government deemed was “communist political propaganda.”

In another consequential case involving a defamation lawsuit brought against The New York Times, the court ruled that even an “erroneous statement” enjoyed some constitutional protection.

“And that’s relevant because here, one of the reasons that Congress is concerned about TikTok is the potential that the Chinese government could use it to spread disinformation,” said Caitlin Vogus, deputy director of the Free Expression Project at the Center for Democracy and Technology, one of the signatories of the letter to Congress.

Proponents of a ban deny a prohibition would run afoul of the First Amendment.

“This is not a First Amendment issue, because we’re not trying to ban booty videos,” Republican Senator Marco Rubio, a longtime critic of TikTok, said on the Senate floor on Monday.

ByteDance, TikTok’s parent company, is beholden to the Chinese Communist Party, Rubio said.

“So, if the Communist Party goes to ByteDance and says, ‘We want you to use that algorithm to push these videos on Americans to convince them of whatever,’ they have to do it. They don’t have an option,” Rubio said.

The Biden administration has reportedly demanded that ByteDance divest itself of TikTok or face a possible ban.

TikTok denies the allegations and says it has taken measures to protect the privacy and security of its U.S. user data.

Rubio is sponsoring one of several competing bills that envision different pathways to a TikTok ban.

A House bill called the Deterring America’s Technological Adversaries Act would empower the president to shut down TikTok.

A Senate bill called the RESTRICT Act would authorize the Commerce Department to investigate information and communications technologies to determine whether they pose national security risks.

This would not be the first time the U.S. government has attempted to ban TikTok.

In 2020, then-President Donald Trump issued an executive order declaring a national emergency that would have effectively shut down the app.

In response, TikTok sued the Trump administration, arguing that the executive order violated its due process and First Amendment rights.

While courts did not weigh in on the question of free speech, they blocked the ban on the grounds that Trump’s order exceeded statutory authority by targeting “informational materials” and “personal communication.”

Allowing the ban would “have the effect of shutting down, within the United States, a platform for expressive activity used by about 700 million individuals globally,” including more than 100 million Americans, federal judge Wendy Beetlestone wrote in response to a lawsuit brought by a group of TikTok users.

A fresh attempt to ban TikTok, whether through legislation or executive action, would likely trigger a First Amendment challenge from the platform, as well as its content creators and users, according to free speech advocates. And the case could end up before the Supreme Court.

In determining the constitutionality of a ban, courts would likely apply a judicial review test known as an “intermediate scrutiny standard,” Vogus said.

“It would still mean that any ban would have to be justified by an important governmental interest and that a ban would have to be narrowly tailored to address that interest,” Vogus said. “And I think that those are two significant barriers to a TikTok ban.”

But others say a “content-neutral” ban would pass muster with the Supreme Court.

“To pass content-neutral laws, the government would need to show that the restraint on speech, if any, is narrowly tailored to serve a ‘significant government interest’ and leaves open reasonable alternative avenues for expression,” Joel Thayer, president of the Digital Progress Institute, wrote in a recent column in The Hill online newspaper.

In Congress, even as the push to ban TikTok gathers steam, there are lone voices of dissent.

One is progressive Democrat Alexandria Ocasio-Cortez. Another is Democratic Representative Jamaal Bowman, himself a prolific TikTok user.

Opposition to TikTok, Bowman said, stems from “hysteria” whipped up by a “Red scare around China.”

“Our First Amendment gives us the right to speak freely and to communicate freely, and TikTok as a platform has created a community and a space for free speech for 150 million Americans and counting,” Bowman, who has more than 180,000 TikTok followers, said recently at a rally held by TikTok content creators.

Instead of singling out TikTok, Bowman said, Congress should enact new legislation to ensure social media users are safe and their data secure.

Blinken Urges Democracies to Use Technology to Help Citizens

U.S. Secretary of State Antony Blinken on Thursday urged democracies around the world to work together to ensure technology is used to promote democratic values and fight efforts by authoritarian regimes to use it to repress, control and divide citizens.

Blinken made the comments as he led a discussion on “Advancing Democracy and Internet Freedom in a Digital Age.” The session was part of U.S. President Joe Biden’s Summit for Democracy, a largely virtual gathering of leaders taking place this week from the State Department in Washington.

Blinken said the world is at the point where technology is “reorganizing the life of the world” and noted many countries are using these technologies to advance democratic principles and make life better for their citizens.

He pointed to the Maldives, where court hearings are being held online; Malaysia, where the internet was used to register 3 million new voters last year; and Estonia, where government services are delivered faster and more simply. 

At the same time, Blinken said the internet is being used more and more to spread disinformation and foment dissent. He said the U.S. and its democratic partners must establish rules and norms to promote an open, free and safe internet.

The secretary of state identified four priorities to help meet this goal, including using technology to improve people’s lives in tangible ways, establishing rights-respecting rules for emerging technologies, investing in innovation, and countering the effects of authoritarian governments’ use of digital tools to abuse citizens and weaken democracies.

Since the summit began earlier in the week, the White House has emphasized the desire of the U.S. to make “technology work for and not against democracy.”

On Wednesday, the prime ministers of eight European countries signed an open letter to the chief executives of major social media companies calling for them to be more aggressive in blocking the spread of false information on their platforms. The leaders of Ukraine, Moldova, Poland, the Czech Republic, Estonia, Latvia, Lithuania and Slovakia signed the letter.

The statement told the companies their tech platforms “have become virtual battlegrounds, and hostile foreign powers are using them to spread false narratives that contradict reporting from fact-based news outlets.” 

It went on to say advertisements and artificial amplification on Meta’s platforms, which include Facebook, are often used to call for social unrest, bring violence to the streets and destabilize governments.

About 120 global leaders are participating in the summit. It is seen as Biden’s attempt to bolster the standing of democracies as autocratic governments advance their own agendas, such as Russia’s 13-month invasion of Ukraine, and China’s alliance with Moscow.

In a statement as the summit opened Tuesday, the White House said, “President Biden has called the struggle to bolster democratic governance at home and abroad the defining challenge of our time.” 

The statement went on to say, “Democracy — transparent and accountable government of, for, and by the people — remains the best way to realize lasting peace, prosperity, and human dignity.” 

Tech Leaders Sign Letter Calling for ‘Pause’ to Artificial Intelligence 

An open letter signed by Elon Musk, Apple co-founder Steve Wozniak and other prominent high-tech experts and industry leaders is calling on the artificial intelligence industry to pause for six months while safety protocols for the technology are developed.

The letter — which as of early Thursday had been signed by nearly 1,400 people — was drafted by the Future of Life Institute, a nonprofit group dedicated to “steering transformative technologies away from extreme, large-scale risks and towards benefiting life.”

In the letter, the group notes the rapidly developing capabilities of AI technology and how it has surpassed human performance in many areas. It cites the example of AI built to design new drug treatments, which could just as easily be used to design deadly pathogens.

Perhaps most significantly, the letter points to the recent introduction of GPT-4, a program developed by San Francisco-based company OpenAI, as a particular cause for concern.

GPT stands for Generative Pre-trained Transformer, a type of language model that uses deep learning to generate human-like conversational text.

The company has said GPT-4, its latest version, is more accurate and human-like and can analyze and respond to images. The firm says the program has passed a simulated bar exam, the test required to become a licensed attorney.

In its letter, the group maintains that such powerful AI systems should be developed “only once we are confident that their effects will be positive and their risks will be manageable.”

Noting the potential a program such as GPT-4 could have to create disinformation and propaganda, the letter calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The letter says AI labs and independent experts should use the pause “to jointly develop and implement a set of shared safety protocols for advanced AI design and development that will ensure they are safe beyond a reasonable doubt.”

Meanwhile, another group has taken its concerns about GPT-4’s potential for harm a step further.

The nonprofit Center for AI and Digital Policy filed a complaint with the U.S. Federal Trade Commission on Thursday calling on the agency to suspend further deployment of the system and launch an investigation.

In its complaint, the group said the technical description of the GPT-4 system provided by its own makers describes almost a dozen major risks posed by its use, including “disinformation and influence operations, proliferation of conventional and unconventional weapons,” and “cybersecurity.”

Some information for this report was provided by The Associated Press and Reuters.