Companies find solutions to power EVs in energy-challenged Africa

NAIROBI, KENYA — Some companies are coming up with creative ways of making electric vehicles a more realistic option in power-challenged areas of Africa.

Countries in Africa have been slow adopters of battery-powered vehicles because finding reliable sources of electricity is a challenge in many places.

The Center for Strategic and International Studies described Africa as “the most energy-deficient continent in the world” and said that any progress made in electricity access in the last five years has been reversed by the pandemic and population growth.

Onesmus Otieno, for one, regrets trading in his diesel-powered motorbike for an electric one. He earns his living making deliveries and ferrying passengers around Nairobi, Kenya’s capital, with his bike.

The two-wheeled taxis popularly known as “boda boda” in Swahili are commonly used in Kenya and throughout Africa. Kenyan authorities recently introduced the electric bikes to phase out diesel ones. Otieno is among the few riders who adopted them, but he said finding a place to charge his bike has been a headache.

Sometimes the battery dies while he is carrying a customer and the nearest charging station is far away, he said, forcing him to end that trip and cancel other requests.

To address the problem, Chinese company Beijing Sebo created a mobile application that allows users of EVs to request a charge through the app. Then, charging equipment is brought to the user’s location.

Lin Lin, general manager for overseas business of Beijing Sebo, said because the company produces the equipment, it can control costs.

“We can deploy the product … in any country they need, and they don’t need to build or fix charging stations,” Lin said. “We can move to the location of the user, and we can bring electricity to electric vehicles.”

Lin said the mobile charging vans use electricity generated from solid waste and can charge up to five cars at one time for about $7 per vehicle — less for a motorbike.

Countries in Africa have been slow to adopt electric vehicles because there is a lack of infrastructure to support the technology, analysts say. The cost of EVs is another barrier, said clean energy expert Ajay Mathur.

“Yes, the capital cost is more,” Mathur said. “The first cost is more, but you recover it in about six years or so. We are at the beginning of the revolution.”

Electric motorbike maker Spiro offers a battery-swapping service in several countries to address the lack of EV infrastructure.

But studies show that for many African countries, access to reliable and affordable electricity remains a challenge. There are frequent power cuts, outages and voltage fluctuations in several regions.

Companies such as Beijing Sebo and Spiro are finding ways around the lack of power in Africa.

“We want to solve the problem of charging anxiety anywhere you are,” Lin said.

This story originated in VOA’s Mandarin Service.

Cryptocurrency promoters on X amplify China-aligned disinformation

WASHINGTON — A group of accounts that regularly promote cryptocurrency-related content on X has amplified messages from Chinese official accounts and from a China-linked disinformation operation known as “Spamouflage,” which covertly pushes Beijing’s propaganda toward Western social media users.

Spamouflage accounts are bots posing as authentic users that promote narratives aligned with Beijing’s talking points on issues such as the COVID-19 pandemic, China’s human rights record, the war in Ukraine and the conflict in Gaza.

The cryptocurrency accounts were discovered by a joint investigation between VOA Mandarin and DoubleThink Lab, a Taiwan-based social media analytics firm.

DoubleThink Lab’s analysis uncovered 1,153 accounts that primarily repost news and promotions about cryptocurrency and are likely bots deployed by engagement boosting services to raise their clients’ visibility on social media.

The findings suggest that some official Chinese X accounts and the Spamouflage operation have been using the same amplification services, which further indicates a link between the Chinese state and Spamouflage.

Beijing has repeatedly denied any attempts to spread disinformation in the United States and other countries.

From cryptocurrency to Spamouflage

A review of the accounts in the VOA-DTL investigation shows that the majority of the posts were about cryptocurrency. Users regularly repost content from some of the biggest cryptocurrency accounts on X, such as ChainGPT and LondonRealTV, which belongs to British podcaster Brian Rose.

But these accounts have also shared content from at least 17 Spamouflage accounts that VOA and DTL have been tracking.

VOA recently reported on Spamouflage networks’ adoption of antisemitic tropes and conspiracy theories.

Spamouflage was first detected by the U.S.-based social media analytics firm Graphika, which coined the name because the operation’s political posts were interspersed with innocuous but spam-like content, such as TikTok videos and scenery photographs, that camouflages the operation’s goal of influencing public opinion.

All of the cryptocurrency accounts have reposted content from a Spamouflage account named “Watermelon cloth” at least once. A review of the account revealed that “Watermelon cloth” regularly posted content critical of social inequalities in the United States and of the Ukrainian and Israeli governments, while praising China’s economic achievements and leadership role in solving international issues.

In one post, the account peddled the conspiracy theory that Washington was developing biological weapons in Ukraine.

“The outbreak of the Russo-Ukrainian war brought out an ‘unspeakable secret’ in the United States. US biological laboratory in Ukraine exposed,” the post said. X recently suspended Watermelon cloth’s account.

Since Watermelon cloth’s first post in March 2023, its content has been reposted nearly 2,600 times, half of them by the cryptocurrency accounts. Most of the remaining reposts came from Spamouflage or other botlike accounts, according to data collected by DoubleThink Lab. The investigation also found that the cryptocurrency accounts’ amplification nearly tripled a post’s view count on average.

Robotic behavior

All 1,153 cryptocurrency accounts have demonstrated patterns that strongly suggest they are bots instead of human users.

They were created in batches on specific dates. On April 6 alone, 152 of them were registered on X.

Over 99% of their content consisted of reposts. A study of their repost behavior on September 24 showed that all the reposts took place within the first hour after the original content was posted. Within each wave, all reposts occurred within six seconds of one another, an indication of coordinated action.
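
For illustration, the sketch below shows the kind of repost-timing check described above: it groups reposts by original post and flags waves in which every repost lands within a few seconds of the others. The input format, the six-second threshold and the function name are assumptions for demonstration only, not DoubleThink Lab’s actual tooling.

```python
# Illustrative only: flag repost "waves" in which all reposts of the same
# original post occur within a narrow time window (six seconds here, matching
# the pattern reported above). Data format and threshold are assumptions.
from collections import defaultdict
from datetime import datetime

def coordinated_waves(reposts, window_seconds=6):
    """reposts: iterable of (original_post_id, repost_timestamp_iso) tuples."""
    by_post = defaultdict(list)
    for post_id, ts in reposts:
        by_post[post_id].append(datetime.fromisoformat(ts))

    flagged = []
    for post_id, times in by_post.items():
        times.sort()
        spread = (times[-1] - times[0]).total_seconds()
        if len(times) > 1 and spread <= window_seconds:
            flagged.append((post_id, len(times), spread))
    return flagged

# Example: three accounts repost the same post within four seconds.
sample = [
    ("post_1", "2024-09-24T10:00:01"),
    ("post_1", "2024-09-24T10:00:03"),
    ("post_1", "2024-09-24T10:00:05"),
]
print(coordinated_waves(sample))  # [('post_1', 3, 4.0)]
```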

At least one such account offered engagement boosting services in its bio with two Telegram links for interested customers. VOA Mandarin contacted the service seller through the links but did not receive a response.

Chinese official accounts amplified

The cryptocurrency group has also promoted posts from Chinese official accounts, including several that belong to Chinese local governments, state media and at least one Chinese diplomat.

The Jinan International Communication Center was the third most amplified account shared by the cryptocurrency group, with its content reposted more than 2,200 times.

The Jinan International Communication Center was established in 2022 to promote the history and culture of Jinan, capital of Shandong province in eastern China, to the rest of the world, as part of Beijing’s “Tell China’s Story Well” propaganda initiative.

A local state media account boasted in an article last year that Jinan was the third most influential Chinese city on X, which was then called Twitter.

Other Chinese cities, including Xiamen and Ningbo, and provinces, such as Anhui and Jilin, had their official accounts amplified by the cryptocurrency group.

Other amplified accounts include Xi’s Moments, a state media project propagating Chinese leader Xi Jinping’s speeches and official activities; China Retold, a media group organized by pro-Beijing politicians in Hong Kong; and the English-language state-owned newspaper China Daily.

Zhang Heqing, a cultural counselor at the Chinese Embassy in Pakistan, was the sole Chinese diplomat whose posts were promoted by the cryptocurrency group.

DoubleThink Lab wrote in an analysis of the data and findings that Chinese official accounts and the Spamouflage operation have “likely” used the same content boosting services, which explains why they were amplified by the same group of cryptocurrency accounts.

The Chinese Embassy in Washington, D.C., declined to answer specific questions about what appears to be a connection between the cryptocurrency group, Chinese official accounts and Spamouflage.

But in a written statement, spokesperson Liu Pengyu rejected the notion that China has used disinformation campaigns to influence social media users in the U.S.

“Such allegations are full of malicious speculations against China, which China firmly opposes,” the statement said.

US military, intelligence agencies ordered to embrace AI

WASHINGTON — The Pentagon and U.S. intelligence agencies have new marching orders — to more quickly embrace and deploy artificial intelligence as a matter of national security.

U.S. President Joe Biden signed the directive, part of a new national security memorandum, on Thursday. The goal is to make sure the United States remains a leader in AI technology while also aiming to prevent the country from falling victim to AI tools wielded by adversaries like China.

The memo, which calls AI “an era-defining technology,” also lays out guidelines that the White House says are designed to prevent the use of AI to harm civil liberties or human rights.

The new rules will “ensure that our national security agencies are adopting these technologies in ways that align with our values,” a senior administration official told reporters, speaking about the memo on the condition of anonymity before its official release.

The official added that a failure to more quickly adopt AI “could put us at risk of a strategic surprise by our rivals.”

“Because countries like China recognize similar opportunities to modernize and revolutionize their own military and intelligence capabilities using artificial intelligence, it’s particularly imperative that we accelerate our national security community’s adoption and use of cutting-edge AI,” the official said.

The new guidelines build on an executive order issued last year, which directed all U.S. government agencies to craft policies for how they intend to use AI.

They also seek to address issues that could hamper Washington’s ability to more quickly incorporate AI into national security systems.

Provisions outlined in the memo call for a range of actions to protect the supply chains that produce the advanced computer chips critical for AI systems. The memo also calls for additional steps to combat economic espionage and prevent U.S. adversaries or non-U.S. companies from stealing critical innovations.

“We have to get this right, because there is probably no other technology that will be more critical to our national security in the years ahead,” said White House National Security Adviser Jake Sullivan, addressing an audience at the National Defense University in Washington on Thursday.

“The stakes are high,” he said. “If we don’t act more intentionally to seize our advantages, if we don’t deploy AI more quickly and more comprehensively to strengthen our national security, we risk squandering our hard-earned lead.

“We could have the best team but lose because we didn’t put it on the field,” he added.

Although the memo prioritizes the implementation of AI technologies to safeguard U.S. interests, it also directs officials to work with allies and others to create a stable framework for use of AI technologies across the globe.

“A big part of the national security memorandum is actually setting out some basic principles,” Sullivan said, citing ongoing talks with the G-7 and AI-related resolutions at the United Nations.

“We need to ensure that people around the world are able to seize the benefits and mitigate the risks,” he said.

AI decodes oinks and grunts to keep pigs happy in Danish study

VIPPEROD, Denmark — European scientists have developed an artificial intelligence algorithm capable of interpreting pig sounds, aiming to create a tool that can help farmers improve animal welfare.

The algorithm could potentially alert farmers to negative emotions in pigs, thereby improving their well-being, according to Elodie Mandel-Briefer, a behavioral biologist at the University of Copenhagen who is co-leading the study.

The scientists, from universities in Denmark, Germany, Switzerland, France, Norway and the Czech Republic, analyzed thousands of pig sounds recorded in different scenarios, including play, isolation and competition for food, and found that grunts, oinks and squeals reveal positive or negative emotions.

While many farmers already have a good understanding of the well-being of their animals by watching them in the pig pen, existing tools mostly measure their physical condition, said Mandel-Briefer.

“Emotions of animals are central to their welfare, but we don’t measure it much on farms,” she said.

The algorithm demonstrated that pigs kept in outdoor, free-range or organic farms with the ability to roam and dig in the dirt produced fewer stress calls than conventionally raised pigs. The researchers believe that this method, once fully developed, could also be used to label farms, helping consumers make informed choices.

“Once we have the tool working, farmers can have an app on their phone that can translate what their pigs are saying in terms of emotions,” Mandel-Briefer said.

Short grunts typically indicate positive emotions, while long grunts often signal discomfort, such as when pigs push each other by the trough. High-frequency sounds like screams or squeals usually mean the pigs are stressed, for instance when they are in pain, fighting or separated from each other.

The scientists used these findings to create an algorithm that employs AI.

“Artificial intelligence really helps us to both process the huge amount of sounds that we get, but also to classify them automatically,” Mandel-Briefer said.

China space plan highlights commitment to space exploration, analysts say

Chinese officials recently released a 25-year space exploration plan that details five major scientific themes and 17 priority areas for scientific breakthroughs, with one goal: to make China a world leader in space by 2050 and a key competitor to the U.S. in space for decades to come.

Last week, the Chinese Academy of Sciences, the China National Space Administration, and the China Manned Space Agency jointly released a space exploration plan for 2024 through 2050.

It includes searching for extraterrestrial life; exploring Mars, Venus and Jupiter; sending crews to the moon; and building an international lunar research station by 2035.

Clayton Swope, deputy director of the Aerospace Security Project at the Center for Strategic and International Studies, says the plan highlights China’s long-term commitment and answers some lingering questions as well.

“I think a lot of experts have wondered if China would continue to invest in space, particularly in science and exploration, given a lot of economic uncertainties in China … but this is a sign that they’re committed,” Swope said.

The plan reinforces a “commitment to really look at space science and exploration in the long term and not just short term,” he added.

The plan outlines Beijing’s goals to send astronauts to the moon by 2030, retrieve the first samples from Mars and complete a mission to the Jupiter system in the next few years. It also outlines three phases of development, each with specific goals for space exploration and key scientific discoveries.

The extensive plan is not only a statement that Beijing can compete with the U.S. in high-tech industries, it is also a way of boosting national pride, analysts say. 

“Space in particular has a huge public awareness, public pride,” says Nicholas Eftimiades, a retired senior intelligence officer and senior fellow at the Atlantic Council, a Washington-based think tank. “It emboldens the Chinese people, gives them a strong sense of nationalism and superiority, and that’s what the main focus of the Beijing government is.”

 

Swope agrees.

“I think it’s [China’s long-term space plan] a manifestation of China’s interest and desire from a national prestige and honor standpoint to really show that it’s a player on the international stage up there with the United States,” he said.

Antonia Hmaidi, a senior analyst at the Mercator Institute for China Studies, told VOA in an email response that “China’s space focus goes back to the 1960s” and that “China has also been very successful at meeting its own goals and timelines.”

In recent years, China has carried out several successful space science missions, including Chang’e-4, which made the world’s first soft landing and roving exploration on the far side of the moon; Chang’e-5, which brought lunar samples back to China for the first time; and Tianwen-1, which landed a Chinese spacecraft on Mars.

 

In addition to these space missions, Beijing has implemented several programs aimed at increasing space-related scientific discovery, particularly through the launch of scientific satellites.

Since 2011, China has developed and launched scientific satellites including Dark Matter Particle Explorer, Quantum Experiments at Space Scale, Advanced Space-based Solar Observatory, and the Einstein Probe.

While China continues to make progress with space exploration and scientific discovery, according to Swope, there is still a way to go before it catches up to the United States.

“China is undeniably the number 2 space power in the world today, behind the United States,” he said. “The United States is still by far the most important in a lot of measures and metrics, including in science and exploration.”

Eftimiades said one key reason the United States has maintained its lead in the space race is the success of Washington’s private, commercial aerospace companies.

 

“The U.S. private industry has got the jump on China,” Eftimiades said. “There’s no type of industrial control, industrial plan. In fact, Congress and administration shy away from that completely.”

Unlike in the United States, large space entities in China are often state-owned, such as the China Aerospace Science and Technology Corporation, Eftimiades said.

He added that one advantage of state ownership is the Chinese government’s ability to “direct their industries toward specific objectives.” At the same time, he said, the bureaucracy that comes with state-owned enterprises leads to less “cutting-edge technology.”

This year, China has focused on growing its space presence relative to the U.S. by conducting more orbital launches. 

Beijing planned to conduct 100 orbital launches this year, according to the state-owned China Aerospace Science and Technology Corporation, which was to conduct 70 of them. However, as of October 15, China had completed 48 orbital launches.

Last week, SpaceX announced it had launched its 100th rocket of the year and had another liftoff just hours later. The private company is aiming for 148 launches this year.

Earlier this year the U.S. Department of Defense implemented its first Commercial Space Integration Strategy, which outlined the department’s efforts to take technologies produced in the private sector and apply their uses for U.S. national security purposes.

In a statement released relating to the U.S. strategic plan, the Department of Defense explained its strategy to work closely with private and commercial sector space companies that are known to be innovative and have scalable production.

According to the statement, “the strategy is based on the premise that the commercial space sector’s innovative capabilities, scalable production and rapid technology refresh rates provide pathways to enhance the resilience of DOD space capabilities and strengthen deterrence.”

Many space technologies have military applications, Swope said.

 

“A lot of things that are done in space have a dual use, so [space technologies] may be primarily used for scientific purposes, but also could be used to design and build and test some type of weapons technology,” Swope said.

Hmaidi says China’s newest space plan stands out for what it doesn’t have.

“The most interesting and striking part about China’s newest space plan to me was the narrow focus on basic science over military goals,” she told VOA in an email. “However, we know from open-source research that China is also very active in military space development.”

“This plan contains only one part of China’s space planning, namely the part that is unlikely to have direct military utility, while not mentioning other missions with direct military utility like its low-earth orbit internet program,” Hmaidi explained.

Chinese official urges Apple to continue ‘deepening’ presence in China

A top Chinese official has urged tech giant Apple to deepen its presence and investment in innovation in the world’s second-largest economy, at a time when supply chains and companies are shifting production and operations away from China.

As U.S.-China geopolitical tensions simmer and tech competition between Beijing and Western countries intensifies, foreign investment in China shrank in 2023 to its lowest level in three decades, according to government statistics.

The United States has banned the export of advanced technology to China, and Beijing’s crackdown on spying, carried out in the name of national security, has spooked investors.

On Wednesday, Jin Zhuanglong – China’s Minister for Industry and Information Technology – told Apple CEO Tim Cook he hoped that, “Apple will continue to deepen its presence in the Chinese market,” urging Cook to “increase investment in innovation, grow alongside Chinese firms, and share in the dividends of high-quality investment,” according to a ministry statement.

At the meeting Jin also discussed “Apple’s development in China, network data security management, (and) cloud services,” according to the statement.

China has the world’s largest market for smartphones, and Apple is a leading competitor there. But the iPhone maker has increasingly lost market share in the country to a growing number of local rivals.

Apple ranked sixth among smartphone vendors in China in the second quarter of this year with a 16% market share, down three positions from the same period last year, according to analysis firm Canalys, AFP reported.

Jin also repeated a frequent pledge from officials in Beijing that China would strive to provide a “better environment” for global investors and “continue to expand high-level opening up.”

Cook’s trip to China was his second of the year. His posts on the X-like Chinese social media platform Weibo showed he visited an Apple store in downtown Beijing, visited an organic farm, and toured ancient neighborhoods with prominent artists such as local photographer Chen Man.

Cook added that he met with students from China Agricultural University and Zhejiang University to receive feedback on how iPhones and iPads can help farmers adopt more sustainable practices.

Some information in this report came from Reuters and AFP.

‘Garbage in, garbage out’: AI fails to debunk disinformation, study finds

WASHINGTON — When it comes to combating disinformation ahead of the U.S. presidential elections, artificial intelligence and chatbots are failing, a media research group has found.

The latest audit by the research group NewsGuard found that generative AI tools struggle to effectively respond to false narratives.

In its latest audit of 10 leading chatbots, compiled in September, NewsGuard found that AI will repeat misinformation 18% of the time and offer a nonresponse 38.33% of the time — leading to a “fail rate” of almost 40%, according to NewsGuard.

“These chatbots clearly struggle when it comes to handling prompt inquiries related to news and information,” said McKenzie Sadeghi, the audit’s author. “There’s a lot of sources out there, and the chatbots might not be able to discern between which ones are reliable versus which ones aren’t.”

NewsGuard has a database of false news narratives that circulate, encompassing global wars and U.S. politics, Sadeghi told VOA.

Every month, researchers feed trending false narratives into leading chatbots in three different forms: innocent user prompts, leading questions and “bad actor” prompts. From there, the researchers measure if AI repeats, fails to respond or debunks the claims.
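
As a rough illustration of how such an audit can be scored, the sketch below tallies graded responses into the three outcomes described above and computes a fail rate as the share of repeats plus nonresponses. In the real audit the grading is done by analysts, and the exact fail-rate definition may differ; the labels and sample numbers here are assumptions for demonstration only.

```python
# Illustrative scoring of chatbot responses to false narratives. Assumes each
# response has already been labeled 'repeat', 'nonresponse' or 'debunk';
# NewsGuard's audit relies on human analysts, not this code.
from collections import Counter

def fail_rate(labels):
    """labels: list of strings, each 'repeat', 'nonresponse' or 'debunk'."""
    counts = Counter(labels)
    total = len(labels)
    repeat = counts["repeat"] / total
    nonresponse = counts["nonresponse"] / total
    return {
        "repeat_pct": round(100 * repeat, 2),
        "nonresponse_pct": round(100 * nonresponse, 2),
        "fail_rate_pct": round(100 * (repeat + nonresponse), 2),
    }

# Example: 30 hypothetical graded responses (10 narratives x 3 prompt styles).
example = ["repeat"] * 5 + ["nonresponse"] * 11 + ["debunk"] * 14
print(fail_rate(example))
# {'repeat_pct': 16.67, 'nonresponse_pct': 36.67, 'fail_rate_pct': 53.33}
```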

AI repeats false narratives mostly in response to bad actor prompts, which mirror the tactics used by foreign influence campaigns to spread disinformation. Around 70% of the instances where AI repeated falsehoods were in response to bad actor prompts, as opposed to leading prompts or innocent user prompts.

Foreign influence campaigns are able to take advantage of such flaws, according to the Office of the Director of National Intelligence. Russia, Iran and China have used generative AI to “boost their respective U.S. election influence efforts,” according to an intelligence report released last month.

As an example of how easily AI chatbots can be misled, Sadeghi cited a NewsGuard study in June that found AI would repeat Russian disinformation if it “masqueraded” as coming from an American local news source.

From myths about migrants to falsehoods about FEMA, the spread of disinformation and misinformation has been a consistent theme throughout the 2024 election cycle.

“Misinformation isn’t new, but generative AI is definitely amplifying these patterns and behaviors,” Sejin Paik, an AI researcher at Georgetown University, told VOA.

Because the technology behind AI is constantly changing and evolving, it is often unable to detect erroneous information, Paik said. This leads to issues not only with the factuality of AI’s output but also with its consistency.

NewsGuard also found that two-thirds of “high quality” news sites block generative AI models from using their media coverage. As a result, AI often has to learn from lower-quality, misinformation-prone news sources, according to the watchdog.

This can be dangerous, experts say. Much of the non-paywalled media that AI trains on is either “propaganda” or “deliberate strategic communication,” media scholar Matt Jordan told VOA.

“AI doesn’t know anything: It doesn’t sift through knowledge, and it can’t evaluate claims,” Jordan, a media professor at Penn State, told VOA. “It just repeats based on huge numbers.”

AI has a tendency to repeat “bogus” news because statistically, it tends to be trained on skewed and biased information, he added. He called this a “garbage in, garbage out model.”

NewsGuard aims to set the standard for measuring accuracy and trustworthiness in the AI industry through monthly surveys, Sadeghi said.

The sector is growing fast, even as issues around disinformation are flagged. The generative AI industry has experienced monumental growth in the past few years. OpenAI’s ChatGPT currently reports 200 million weekly users, more than double from last year, according to Reuters.

The growth in popularity of these tools leads to another problem in their output, according to Anjana Susarla, a professor in Responsible AI at Michigan State University. Since there is such a high quantity of information going in — from users and external sources — it is hard to detect and stop the spread of misinformation.

Many users are still willing to believe the outputs of these chatbots are true, Susarla said.

“Sometimes, people can trust AI more than they trust human beings,” she told VOA.

The solution to this may be bipartisan regulation, she added. She hopes that the government will encourage social media platforms to regulate malicious misinformation.

Jordan, on the other hand, believes the solution is with media audiences.

“The antidote to misinformation is to trust in reporters and news outlets instead of AI,” he told VOA. “People sometimes think that it’s easier to trust a machine than it is to trust a person. But in this case, it’s just a machine spewing out what untrustworthy people have said.”

Microsoft to allow autonomous AI agent development starting next month

Microsoft will allow customers to build autonomous artificial intelligence agents starting in November, the software giant said on Monday, in its latest move to tap the booming technology.

The company is positioning autonomous agents — programs that, unlike chatbots, require little human intervention — as “apps for an AI-driven world,” capable of handling client inquiries, identifying sales leads and managing inventory.

Other big technology firms such as Salesforce have also touted the potential of such agents, tools that some analysts say could provide companies with an easier path to monetizing the billions of dollars they are pouring into AI.

Microsoft said its customers can use Copilot Studio – an application that requires little knowledge of computer code – to create autonomous agents in public preview from November. It is using several AI models developed in-house and by OpenAI for the agents.

The company is also introducing 10 ready-to-use agents that can help with routine tasks ranging from supply chain management to expense tracking and client communications.

In one demo, McKinsey & Co., which had early access to the tools, created an agent that can manage client inquiries by checking interaction history, identifying the right consultant for a task and scheduling a follow-up meeting.

“The idea is that Copilot [the company’s chatbot] is the user interface for AI,” Charles Lamanna, corporate vice president of business and industry Copilot at Microsoft, told Reuters.

“Every employee will have a Copilot, their personalized AI agent, and then they will use that Copilot to interface and interact with the sea of AI agents that will be out there.”

Tech giants are facing investor pressure to show returns on their significant AI investments. Microsoft’s shares fell 2.8% in the September quarter, underperforming the S&P 500, but remain more than 10% higher for the year.

Some concerns have arisen in recent months about the pace of Copilot adoption, with research firm Gartner saying in August that its survey of 152 IT organizations showed the vast majority had not progressed their Copilot initiatives past the pilot stage.

Tiny Caribbean island of Anguilla turns AI boom into digital gold mine

The artificial intelligence boom has benefited chatbot makers, computer scientists and Nvidia investors. It’s also providing an unusual windfall for Anguilla, a tiny island in the Caribbean.

ChatGPT’s debut nearly two years ago heralded the dawn of the AI age and kicked off a digital gold rush as companies scrambled to stake their own claims by acquiring websites that end in .ai.

That’s where Anguilla comes in. The British territory was allotted control of the .ai internet address in the 1990s. It was one of hundreds of obscure top-level domains assigned to individual countries and territories based on their names. While the domains are supposed to indicate a website has a link to a particular region or language, it’s not always a requirement.

Google uses google.ai to showcase its artificial intelligence services while Elon Musk uses x.ai as the homepage for his Grok AI chatbot. Startups like AI search engine Perplexity have also snapped up .ai web addresses, redirecting users from the .com version.

Anguilla’s earnings from web domain registration fees quadrupled last year to $32 million, fueled by the surging interest in AI. The income now accounts for about 20% of Anguilla’s total government revenue. Before the AI boom, it hovered at around 5%.

Anguilla’s government, which uses the gov.ai home page, collects a fee every time an .ai web address is renewed. The territory signed a deal Tuesday with a U.S. company to manage the domains amid explosive demand but the fees aren’t expected to change. It also gets paid when new addresses are registered and expired ones are sold off. Some sites have fetched tens of thousands of dollars.

The money directly boosts the economy of Anguilla, which is just 91 square kilometers and has a population of about 16,000. Blessed with coral reefs, clear waters and palm-fringed white sand beaches, the island is a haven for uber-wealthy tourists. Still, many residents are underprivileged, and tourism has been battered by the pandemic and, before that, a powerful hurricane.

Anguilla doesn’t have its own AI industry though Premier Ellis Webster hopes that one day it will become a hub for the technology. He said it was just luck that it was Anguilla, and not nearby Antigua, that was assigned the .ai domain in 1995 because both places had those letters in their names.

Webster said the money takes the pressure off government finances and helps fund key projects but cautioned that “we can’t rely on it solely.”

“You can’t predict how long this is going to last,” Webster said in an interview with the AP. “And so I don’t want to have our economy and our country and all our programs just based on this. And then all of a sudden there’s a new fad comes up in the next year or two, and then we are left now having to make significant expenditure cuts, removing programs.”

To help keep up with the explosive growth in domain registrations, Anguilla said Tuesday it’s signing a deal with a U.S.-based domain management company, Identity Digital, to help manage the effort. They said the agreement will mean more revenue for the government while improving the resilience and security of the web addresses.

Identity Digital, which also manages Australia’s .au domain, expects to migrate all .ai domain services to its systems by the start of next year, Identity Digital Chief Strategy Officer Ram Mohan said in an interview.

A local software entrepreneur had helped Anguilla set up its registry system decades earlier.

There are now more than 533,000 .ai web domains, an increase of more than tenfold since 2018. The International Monetary Fund said in a May report that the earnings will help diversify the economy, “thus making it more resilient to external shocks.”

Webster expects domain-related revenue to rise further and possibly double this year from last year’s $32 million.

He said the money will finance the airport’s expansion, free medical care for senior citizens and completion of a vocational technology training center at Anguilla’s high school.

The income also provides “budget support” for other projects the government is eyeing, such as a national development fund it could quickly tap for hurricane recovery efforts. The island normally relies on assistance from its administrative power, Britain, which comes with conditions, Webster said.

Mohan said working with Identity Digital will also defend against cyber crooks trying to take advantage of the hype around artificial intelligence.

He cited the example of Tokelau, an island in the Pacific Ocean, whose .tk addresses became notoriously associated with spam and phishing after outsourcing its registry services.

“We worry about bad actors taking something, sticking a .ai to it, and then making it sound like they are much bigger or much better than what they really are,” Mohan said, adding that the company’s technology will quickly take down shady sites.

Another benefit is that .ai websites will no longer need to connect to the government’s digital infrastructure through a single internet cable to the island, a setup that left them vulnerable to digital bottlenecks and physical disruptions.

Now they’ll use the company’s servers distributed globally, which means it will be faster to access them because they’ll be closer to users.

“It goes from milliseconds to microseconds,” Mohan said.

Drone maker DJI sues Pentagon over Chinese military listing

WASHINGTON — China-based DJI sued the U.S. Defense Department on Friday for adding the drone maker to a list of companies allegedly working with Beijing’s military, saying the designation is wrong and has caused the company significant financial harm.

DJI, the world’s largest drone manufacturer, which sells more than half of all commercial drones in the U.S., asked a U.S. district judge in Washington to order its removal from the Pentagon list designating it as a “Chinese military company,” saying it “is neither owned nor controlled by the Chinese military.”

Being placed on the list is a warning to U.S. entities and companies about the national security risks of doing business with the designated firms.

DJI’s lawsuit says because of the Defense Department’s “unlawful and misguided decision” it has “lost business deals, been stigmatized as a national security threat, and been banned from contracting with multiple federal government agencies.”

The company added “U.S. and international customers have terminated existing contracts with DJI and refuse to enter into new ones.”

The Defense Department did not immediately respond to a request for comment.

DJI said on Friday it filed the lawsuit after the Defense Department did not engage with the company over the designation for more than 16 months, saying it “had no alternative other than to seek relief in federal court.”

Amid strained ties between the world’s two biggest economies, the updated list is one of numerous actions Washington has taken in recent years to highlight and restrict Chinese companies that it says may strengthen Beijing’s military.

Many major Chinese firms are on the list, including aviation company AVIC, memory chip maker YMTC, China Mobile and energy company CNOOC.

In May, lidar maker Hesai Group filed a suit challenging the Pentagon’s Chinese military designation of the company. On Wednesday, the Pentagon removed Hesai from the list but said it would immediately relist the China-based firm on national security grounds.

DJI is facing growing pressure in the United States.

Earlier this week DJI told Reuters that Customs and Border Protection is stopping imports of some DJI drones from entering the United States, citing the Uyghur Forced Labor Prevention Act.

DJI said no forced labor is involved at any stage of its manufacturing.

U.S. lawmakers have repeatedly raised concerns that DJI drones pose data transmission, surveillance and national security risks, something the company rejects.

Last month, the U.S. House voted to bar new DJI drones from operating in the U.S. The bill awaits Senate action. The Commerce Department said last month that it is seeking comments on whether to impose restrictions on Chinese drones that would effectively ban them in the U.S., similar to proposed restrictions on Chinese vehicles.

Residents on Kenya’s coast use app to track migratory birds

The Tana River delta on the Kenyan coast includes a vast range of habitats and a remarkably productive ecosystem, says UNESCO. It is also home to many bird species, including some that are nearly threatened. Residents are helping local conservation efforts with an app called eBird. Juma Majanga reports.

Chinese cyber association calls for review of Intel products sold in China 

BEIJING — Intel products sold in China should be subject to a security review, the Cybersecurity Association of China (CSAC) said on Wednesday, alleging the U.S. chipmaker has “constantly harmed” the country’s national security and interests. 

While CSAC is an industry group rather than a government body, it has close ties to the Chinese state, and its raft of accusations against Intel, published in a long post on its official WeChat account, could trigger a security review by China’s powerful cyberspace regulator, the Cyberspace Administration of China (CAC).

“It is recommended that a network security review is initiated on the products Intel sells in China, so as to effectively safeguard China’s national security and the legitimate rights and interests of Chinese consumers,” CSAC said. 

Last year, the CAC barred domestic operators of key infrastructure from buying products made by U.S. memory chipmaker Micron Technology Inc after deeming the company’s products had failed its network security review. 

Intel did not immediately respond to a request for comment. The company’s shares were down 2.7% in U.S. premarket trading.  

 

Tech firms increasingly look to nuclear power for data centers

As energy-hungry computer data centers and artificial intelligence programs place ever greater demands on the U.S. power grid, tech companies are looking to a technology that just a few years ago appeared ready to be phased out: nuclear energy. 

After several decades in which investment in new nuclear facilities in the U.S. had slowed to a crawl, tech giants Microsoft and Google have recently announced investments in the technology, aimed at securing a reliable source of emissions-free power for years into the future.  

Earlier this year, online retailer Amazon, which has an expansive cloud computing business, announced it had reached an agreement to purchase a nuclear energy-fueled data center in Pennsylvania and that it had plans to buy more in the future. 

However, the three companies’ strategies rely on somewhat different approaches to the problem of harnessing nuclear energy, and it remains unclear which, if any, will be successful. 

Energy demand 

Data centers, which concentrate thousands of powerful computers in one location, consume prodigious amounts of power, both to run the computers themselves and to operate the elaborate systems put in place to dissipate the large amount of heat they generate.  

A recent study by Goldman Sachs estimated that data centers currently consume between 1% and 2% of all available power generation. That percentage is expected to at least double by the end of the decade, even accounting for new power sources coming online. The study projected a 160% increase in data center power consumption by 2030. 

The U.S. Department of Energy has estimated that the largest data centers can consume more than 100 megawatts of electricity, or enough to power about 80,000 homes. 
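
As a back-of-envelope check of that comparison, the short sketch below divides a 100-megawatt data center load among 80,000 homes. The typical-household consumption figure it is compared against is an outside assumption, not a number from the Energy Department estimate.

```python
# Rough sanity check of the figures above. The "typical U.S. household" figure
# (~10,500 kWh per year) is an outside assumption used only for comparison.
data_center_mw = 100      # large data center load cited by the Energy Department
homes = 80_000            # number of homes it is said to be able to power

kw_per_home = data_center_mw * 1_000 / homes
kwh_per_year = kw_per_home * 24 * 365

print(f"{kw_per_home:.2f} kW average draw per home")   # 1.25 kW
print(f"{kwh_per_year:,.0f} kWh per home per year")    # 10,950 kWh, close to typical U.S. household use
```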

Small, modular reactors 

Google’s plan is, in some ways, the most radical departure — both from the current structure of the energy grid and from traditional means of generating nuclear power. The internet search giant announced on Monday that it has partnered with Kairos Power to fund the construction of up to seven small-scale nuclear reactors that, across several locations, would combine to generate 500 megawatts of power. 

The small modular reactors (SMRs) are a new, and largely untested, technology. Unlike sprawling nuclear plants, SMRs are compact, requiring much less infrastructure to keep them operational and safe. 

“The smaller size and modular design can reduce construction timelines, allow deployment in more places, and make the final project delivery more predictable,” Google and Kairos said in a press release.  

The companies said they intend to have the first of the SMRs online by 2030, with the rest to follow by 2035. 

Great promise 

Sola Talabi, president of Pittsburgh Technical, a nuclear consulting firm, told VOA that SMR technology holds great promise for the future. He said that the plants’ small size will eliminate many of the safety concerns that larger reactors present. 

For example, some smaller reactors generate so much less heat than larger reactors that they can utilize “passive” cooling systems that are not susceptible to the kind of mechanical failures that caused disaster at Japan’s Fukushima plant in 2011 and the Soviet Union’s Chernobyl plant in 1986.  

Talabi, who is also an adjunct faculty member in nuclear engineering at the University of Pittsburgh and University of Michigan, said that SMRs’ modular nature will allow for rapid deployment and substantial cost savings as time goes on. 

“Pretty much every reactor that has been built [so far] has been built like it’s the first one,” he said. “But with these reactors, because we will be able to use the same processes, the same facilities, to produce them, we actually expect that we will be able to … achieve deployment scale relatively quickly.” 

Raising doubts 

Not all experts are convinced that SMRs are going to live up to expectations. 

Edwin Lyman, director of nuclear power safety for the Union of Concerned Scientists, told VOA that the Kairos reactors Google is hoping to install use a new technology that has never been tested under real-world conditions.

“At this point, it’s just hope without any real basis in experimental fact to believe that this is going to be a productive and reliable solution for the need to power data centers over the medium term,” he said. 

He pointed out that the large-scale deployment of new nuclear reactors will also result in the creation of a new source of nuclear waste, which the U.S. is still struggling to find a way to dispose of at scale.  

“I think what we’re seeing is really a bubble — a nuclear bubble — which I suspect is going to be deflated once these optimistic, hopeful agreements turn out to be much harder to execute,” Lyman said. 

Three Mile Island 

Microsoft and Amazon have plotted a more conventional path toward powering their data centers with nuclear energy. 

In its announcement last month, Microsoft revealed that it has reached an agreement with Constellation Energy to restart a mothballed nuclear reactor at Three Mile Island in Pennsylvania and to use the power it produces for its data operations. 

Three Mile Island is best known as the site of the worst nuclear disaster in U.S. history. In 1979, the site’s Unit 2 reactor suffered a malfunction that resulted in radioactive gases and iodine being released into the local environment.  

However, the facility’s Unit 1 reactor did not fail, and it operated safely for several decades. It was shut down in 2019, after cheap shale gas drove the price of energy down so far that it made further operations economically unfeasible. 

It is expected to cost $1.6 billion to bring the reactor back online, and Microsoft has agreed to fund that investment. It has also signed an agreement to purchase power from the facility for 20 years. The companies say they believe that they can bring the facility back online by 2028. 

Amazon’s plan, by contrast, does not require either new technology or the resurrection of an older nuclear facility. 

The data center that the company purchased from Talen Energy is located on the same site as the fully operational Susquehanna nuclear plant in Salem, Pennsylvania, and draws power directly from it. 

Amazon characterized the $650 million investment as part of a larger effort to reach net-zero carbon emissions by 2040. 

Report: Iran cyberattacks against Israel surge after Gaza war

Israel has become the top target of Iranian cyberattacks since the start of the Gaza war last year, while Tehran had focused primarily on the United States before the conflict, Microsoft said Tuesday.

“Following the outbreak of the Israel-Hamas war, Iran surged its cyber, influence, and cyber-enabled influence operations against Israel,” Microsoft said in an annual report.

“From October 7, 2023, to July 2024, nearly half of the Iranian operations Microsoft observed targeted Israeli companies,” said the Microsoft Digital Defense Report.

From July to October 2023, only 10 percent of Iranian cyberattacks targeted Israel, while 35 percent aimed at American entities and 20 percent at the United Arab Emirates, according to the U.S. software giant.

Since the war started, Iran has launched numerous social media operations aimed at destabilizing Israel.

“Within two days of Hamas’ attack on Israel, Iran stood up several new influence operations,” Microsoft said.

An account called “Tears of War” impersonated Israeli activists critical of Prime Minister Benjamin Netanyahu’s handling of a crisis over scores of hostages taken by Hamas, according to the report.

An account called “KarMa”, created by an Iranian intelligence unit, claimed to represent Israelis calling for Netanyahu’s resignation. 

Iran also began impersonating partners after the war started, Microsoft said.

Iranian services created a Telegram account using the logo of the military wing of Hamas to spread false messages about the hostages in Gaza and threaten Israelis, Microsoft said. It was not clear if Iran acted with Hamas’s consent, it added.

“Iranian groups also expanded their cyber-enabled influence operations beyond Israel, with a focus on undermining international political, military, and economic support for Israel’s military operations,” the report said.

The Hamas terror attack on October 7, 2023, resulted in the deaths of 1,206 people, mostly civilians, according to an AFP tally of official Israeli figures, including hostages killed in captivity.  

Israel’s retaliatory military campaign in Gaza has killed 42,289 people, the majority civilians, according to the health ministry in the Hamas-run territory. The U.N. has described the figures as reliable. 

Africa’s farming future could include more digital solutions

NAIROBI, KENYA — More than 400 delegates and organizations working in Africa’s farming sector are in Nairobi, Kenya, this week to discuss how digital agriculture can improve the lives of farmers and the continent’s food system.

Tech innovators discussed the need for increased funding, especially for women.

In past decades, African farmers have struggled to produce enough food to feed the continent.

DigiCow is one of the tech companies at the conference that says it has answers to the problem. The Kenya-based company says it provides farmers with digital recordkeeping, education via audio on an app, and access to financing and marketing.

Maureen Saitoti, DigiCow’s brand manager, said the platform has improved the lives of at least half a million farmers.

“Other than access to finance, it is also able to offer access to the market because a farmer is able to predict the harvest they are anticipating and begin conversations with buyers who have also been on board on the platform,” she said. “So, this has proven to provide a wholesome integration of the ecosystem, supporting small-scale farmers.”

Integrating digital systems into food production helps farmers gain access to seed, fertilizer and loans, and helps prevent pests and diseases on farms, organizers said.

Innovation in agriculture technology is seen as helping reach marginalized groups, including women.

Sieka Gatabaki, program director for Mercy Corps AgriFin, which works with digital tool providers in 40 countries to increase the productivity and incomes of small-scale farmers, said his organization stresses education and practical information.

“We also focus on agronomic advice that gives the farmers the right kind of skills and knowledge to execute on their farms, as well as precision information such as weather that enables them to make the right decisions [about] how they grow and when they should grow and what they should grow in different geomatic climates,” Gatabaki said.

“Then we definitely expect that those farmers will increase their productivity and income.”

According to the State of AgTech Investment Report 2024, farming attracted $1.6 billion in funding in the past decade. But experts say the current funding is not enough to meet the sector’s growing demands.

David Saunder, director of strategy and growth at Briter Bridges, says funding systems have evolved to cope with problems faced by farmers and the food industry.

“Funding follows those businesses, those startups, that can viably grow and scale their businesses, and that’s what we are trying to do with AgTech to increase the data and information on those,” he said.

During the meeting, tech developers, experts and donors will also discuss how artificial intelligence and alternative data could be used to improve productivity.

US lawmakers seek answers from telecoms on Chinese hacking report

WASHINGTON — A bipartisan group of U.S. lawmakers asked AT&T, Verizon Communications, and Lumen Technologies on Friday to answer questions after a report that Chinese hackers accessed the networks of U.S. broadband providers. 

The Wall Street Journal reported Saturday that hackers obtained information from systems the federal government uses for court-authorized wiretapping, and that the three companies were among the telecoms whose networks were breached.

House Energy and Commerce Committee Chair Cathy McMorris Rodgers, a Republican, and the committee’s top Democrat, Representative Frank Pallone, along with Representatives Bob Latta and Doris Matsui, asked the three companies to answer questions. They are seeking a briefing and detailed answers by next Friday.

“There is a growing concern regarding the cybersecurity vulnerabilities embedded in U.S. telecommunications networks,” the lawmakers said. They are asking for details on what information was seized and when the companies learned about the intrusion. 

AT&T and Lumen declined to comment, while Verizon did not immediately comment. 

It was unclear when the hack occurred. 

Hackers might have held access for months to network infrastructure used by the companies to cooperate with court-authorized U.S. requests for communications data, the Journal said. It said the hackers had also accessed other tranches of internet traffic. 

China’s foreign ministry said on Sunday that it was not aware of the attack described in the report but said the United States had “concocted a false narrative” to “frame” China in the past. 

US states sue TikTok, saying it harms young users

NEW YORK/WASHINGTON — TikTok faces new lawsuits filed by 13 U.S. states and the District of Columbia on Tuesday, accusing the popular social media platform of harming and failing to protect young people.

The lawsuits, filed separately in New York, California, the District of Columbia and 11 other states, expand Chinese-owned TikTok’s legal fight with U.S. regulators and seek new financial penalties against the company.

Washington is located in the District of Columbia.

The states accuse TikTok of using intentionally addictive software designed to keep children watching as long and often as possible and misrepresenting its content moderation effectiveness.

“TikTok cultivates social media addiction to boost corporate profits,” California Attorney General Rob Bonta said in a statement. “TikTok intentionally targets children because they know kids do not yet have the defenses or capacity to create healthy boundaries around addictive content.”

TikTok seeks to maximize the amount of time users spend on the app in order to target them with ads, the states said.

“Young people are struggling with their mental health because of addictive social media platforms like TikTok,” said New York Attorney General Letitia James.

TikTok said on Tuesday that it strongly disagreed with the claims, “many of which we believe to be inaccurate and misleading,” and that it was disappointed the states chose to sue “rather than work with us on constructive solutions to industrywide challenges.”

TikTok provides safety features that include default screentime limits and privacy defaults for minors under 16, the company said.

Washington, D.C., Attorney General Brian Schwalb alleged that TikTok operates an unlicensed money transmission business through its livestreaming and virtual currency features.

“TikTok’s platform is dangerous by design. It’s an intentionally addictive product that is designed to get young people addicted to their screens,” Schwalb said in an interview.

Washington’s lawsuit accused TikTok of facilitating sexual exploitation of underage users, saying TikTok’s livestreaming and virtual currency “operate like a virtual strip club with no age restrictions.”

Illinois, Kentucky, Louisiana, Massachusetts, Mississippi, New Jersey, North Carolina, Oregon, South Carolina, Vermont and Washington state also sued on Tuesday.

In March 2022, eight states, including California and Massachusetts, said they had launched a nationwide probe of TikTok’s impact on young people.

The U.S. Justice Department sued TikTok in August for allegedly failing to protect children’s privacy on the app. Other states, including Utah and Texas, previously sued TikTok for failing to protect children from harm. TikTok on Monday rejected the allegations in a court filing.

TikTok’s Chinese parent company, ByteDance, is battling a U.S. law that could ban the app in the United States.

China-connected Spamouflage networks spread antisemitic disinformation

WASHINGTON — Spamouflage networks with connections to China are posting antisemitic conspiracy theories on social media, casting doubt on Washington’s independence from alleged Jewish influence and the integrity of the two U.S. presidential candidates, a joint investigation by VOA Mandarin and Taiwan’s Doublethink Lab, a social media analytics firm, has found.

The investigation has so far uncovered more than 30 such X posts, many of which claim or suggest that core American political institutions, including the White House and Congress, have pledged loyalty to or are controlled by Jewish elites and the Israeli government.

One post shows a graphic of 18 U.S. officials of Jewish descent, including Secretary of State Antony Blinken, Treasury Secretary Janet Yellen, and the head of the Homeland Security Department, Alejandro Mayorkas, and asks: “Jews only make up 2% of the U.S. population, so why do they have so many representatives in important government departments?!”

Another post shows a cartoon depicting Vice President Kamala Harris, the Democratic candidate for president, and her opponent, Donald Trump, having their tongues tangled together and wrapped around an Israeli flagpole. The post proclaims that “no matter who of them comes to power, they will not change their stance on Judaism.”

Most of the 32 posts analyzed by VOA Mandarin and Doublethink Lab were posted during July and August. The posts came from three spamouflage accounts, two of which were previously reported by VOA.

Each of the three accounts leads its own spamouflage network. The three networks consist of 140 accounts, which amplify content from the three main accounts, or seeders.

A spamouflage network is a state-sponsored operation disguised as the work of authentic social media users to spread pro-government narratives and disinformation while discrediting criticism from adversaries.

Jasper Hewitt, a digital intelligence analyst at Doublethink Lab, told VOA Mandarin that the impact of these antisemitic posts has been limited, as most of them failed to reach real users, despite having garnered over 160,000 views.

U.S. officials have cast China as one of the major threats looking to disrupt this year’s election. Beijing, however, has repeatedly denied these allegations and urged Washington to “not make an issue of China in the election.”

Tuvia Gering, a nonresident fellow at the Atlantic Council’s Global China Hub, has closely followed antisemitic disinformation coming from China. He told VOA Mandarin that Beijing isn’t necessarily hostile toward Jews, but antisemitic conspiracy theories have historically been a handy tool for use against Western countries.

“You can trace its origins back to the Cold War, when the Soviet Union promoted antisemitic conspiracy theories all over the world just to instigate in Western societies,” Gering said, “because it divides them from within and it casts the West in a bad light in a strategic competition. [It’s] the same thing you see here [with China].”

Antisemitic speech floods Chinese internet

Similar antisemitic narratives about U.S. politics posted by the spamouflage accounts have long been flourishing on the Chinese internet.

An article that received thousands of likes and reposts on Chinese social media app WeChat claims that “Jewish capital” has completed its control of the American political sphere “through infiltration, marriages, campaign funds and lobbying.”

The article also brings up the Jewish heritage of many current and former U.S. officials and their families as evidence of the alleged Jewish takeover of America.

“The wife of the U.S. president is Jewish, the son-in-law of the former U.S. president is Jewish, the mother of the previous former U.S. president was Jewish, the U.S. Secretary of State is Jewish, the U.S. Secretary of Treasury is Jewish, the Deputy Secretary of State, the Attorney General … are all Jewish,” it wrote.

In fact, first lady Jill Biden is Roman Catholic, and the mother of former President Barack Obama was raised as a Christian. The others named are Jewish.

Conspiracy theories and misinformation abounded on the Chinese internet after the U.S. House of Representatives passed a bill in May that would empower the Department of Education to adopt a new set of standards when investigating antisemitism in educational programs.

Articles and videos assert that the bill marks the death of America because it “definitively solidifies the superior and unquestionable position of the Jews in America,” claiming falsely that anyone who’s labeled an antisemite will be arrested.

One video with more than 1 million views claimed that the New Testament of the Bible would be deemed illegal under the bill. And since all U.S. presidents took their inaugural oath with the Bible, the bill allegedly invalidates the legitimacy of the commander in chief. None of that is true.

The Chinese public hasn’t historically been hostile toward Jews. A 2014 survey published by the Anti-Defamation League, a U.S.-based group that fights antisemitism, found that only 20% of participants from China harbored antisemitic attitudes.

But when the Israel-Hamas conflict broke out a year ago, otherwise heavily censored Chinese social media platforms were flooded with antisemitic comments and praise for Nazi German leader Adolf Hitler.

The Chinese government has dismissed criticism of antisemitism on its internet. When asked about it at a news conference last year, Wang Wenbin, then the spokesperson of the Foreign Ministry, said that “China’s laws unequivocally prohibit disseminating information on extremism, ethnic hatred, discrimination and violence via the internet.”

But online hate speech against Jews has hardly disappeared. Eric Liu, a former censor for the Chinese social media platform Weibo who now monitors online censorship, told VOA Mandarin that whenever Israel is in the news, there is a surge in online antisemitism.

Just last month, after dozens of members of the Lebanon-based militant group Hezbollah were killed by explosions of their pagers, Chinese online commentators acidly condemned Israel and Jews.

The attack “proves that Jews are the most terrifying and cowardly people,” one Weibo user wrote. “They are self-centered and believe themselves to be superior, when in fact they are considered the most indecent and shameless. When the time comes, it’s going to be blood for blood.”

Australia’s online dating industry agrees to code of conduct to protect users

MELBOURNE, Australia — A code of conduct will be enforced on the online dating industry to better protect Australian users after research found that three in four people suffer some form of sexual violence through the platforms, Australia’s government said on Tuesday.

Bumble, Grindr and Match Group Inc., a Texas-based company that owns platforms including Tinder, Hinge, OKCupid and Plenty of Fish, have agreed to the code that took effect on Tuesday, Communications Minister Michelle Rowland said.

The platforms, which account for 75% of the industry in Australia, have until April 1 to implement the changes before they are strictly enforced, Rowland said.

The code requires the platforms’ systems to detect potential incidents of online-enabled harm and demands that the accounts of some offenders be terminated.

Complaint and reporting mechanisms are to be made prominent and transparent. A new rating system will show users how well platforms are meeting their obligations under the code.

The government called for a code of conduct last year after Australian Institute of Criminology research found that three in four users of dating apps or websites had experienced some form of sexual violence through these platforms in the five years through 2021.

“There needs to be a complaint-handling process. This is a pretty basic feature that Australians would have expected in the first place,” Rowland said on Tuesday.

“If there are grounds to ban a particular individual from utilizing one of those platforms, if they’re banned on one platform, they’re blocked on all platforms,” she added.

Match Group said it had already introduced new safety features on Tinder, including photo and identification verification to prevent bad actors from accessing the platform while giving users more confidence in the authenticity of their connections.

The platform also uses artificial intelligence to issue real-time warnings about potentially offensive language in an opening line and to advise users to pause before sending.

“This is a pervasive issue, and we take our responsibility to help keep users safe on our platform very seriously,” Match Group said in a statement on Wednesday.

Match Group said it would continue to collaborate with the government and the industry to “help make dating safer for all Australians.”

Bumble said it shared the government’s hope of eliminating gender-based violence and was grateful for the opportunity to work with the government and industry on what the platform described as a “world-first dating code of practice.”

“We know that domestic and sexual violence is an enormous problem in Australia, and that women, members of LGBTQ+ communities, and First Nations are the most at risk,” a Bumble statement said.

“Bumble puts women’s experiences at the center of our mission to create a world where all relationships are healthy and equitable, and safety has been central to our mission from day one,” Bumble added.

Grindr said in a statement it was “honored to participate in the development of the code and shares the Australian government’s commitment to online safety.”

All the platforms helped design the code.

Platforms that have not signed up include Happn, Coffee Meets Bagel and Feeld.

The government expects the code will enable Australians to make better-informed choices about which dating apps are best equipped to provide a safe dating experience.

The government has also warned the online dating industry that it will legislate if the operators fail to keep Australians safe on their platforms.

Arkansas sues YouTube over claims it’s fueling mental health crisis

LITTLE ROCK, Arkansas — Arkansas sued YouTube and parent company Alphabet on Monday, saying the video-sharing platform is deliberately made addictive and is fueling a mental health crisis among youth in the state.

Attorney General Tim Griffin’s office filed the lawsuit in state court, accusing the companies of violating the state’s deceptive trade practices and public nuisance laws. The lawsuit claims the site is addictive and has resulted in the state spending millions on expanded mental health and other services for young people.

“YouTube amplifies harmful material, doses users with dopamine hits, and drives youth engagement and advertising revenue,” the lawsuit said. “As a result, youth mental health problems have advanced in lockstep with the growth of social media, and in particular, YouTube.”

Alphabet’s Google, which owns the video service and is also named as a defendant in the case, denied the lawsuit’s claims.

“Providing young people with a safer, healthier experience has always been core to our work. In collaboration with youth, mental health and parenting experts, we built services and policies to provide young people with age-appropriate experiences, and parents with robust controls,” Google spokesperson Jose Castaneda said in a statement. “The allegations in this complaint are simply not true.”

YouTube requires users under 17 to get a parent’s permission before using the site, while accounts for users younger than 13 must be linked to a parental account. But it is possible to watch YouTube without an account, and kids can easily lie about their age.

The lawsuit is the latest in an ongoing push by state and federal lawmakers to highlight the impact that social media sites have on younger users. U.S. Surgeon General Vivek Murthy in June called on Congress to require warning labels on social media platforms about their effects on young people’s lives, like those now mandatory on cigarette boxes.

Arkansas last year filed similar lawsuits against TikTok and Facebook parent company Meta, claiming the social media companies were misleading consumers about the safety of children on their platforms and protections of users’ private data. Those lawsuits are still pending in state court.

Arkansas also enacted a law requiring parental consent for minors to create new social media accounts, though that measure has been blocked by a federal judge.

Along with TikTok, YouTube is one of the most popular sites for children and teens. Both sites have been questioned in the past for hosting, and in some cases promoting, videos that encourage gun violence, eating disorders and self-harm.

YouTube in June changed its policies about firearm videos, prohibiting any videos demonstrating how to remove firearm safety devices. Under the new policies, videos showing homemade guns, automatic weapons and certain firearm accessories like silencers will be restricted to users 18 and older.

Arkansas’ lawsuit claims that YouTube’s algorithms steer youth to harmful adult content, and that it facilitates the spread of child sexual abuse material.

The lawsuit doesn’t seek specific damages, but asks that YouTube be ordered to fund prevention, education and treatment for “excessive and problematic use of social media.”

California governor vetoes bill to create first-in-nation AI safety measures

SACRAMENTO, California — California Governor Gavin Newsom on Sunday vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models.

The decision is a major blow to efforts to rein in a homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said.

Earlier in September, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal “can have a chilling effect on the industry.”

The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections to workers.

The legislation is among a host of bills passed by the legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance.

Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some level of transparency and accountability around large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.

The bill targeted systems that require more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year.

“This is because of the massive investment scale-up within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company’s disregard for AI risks. “This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky.”

The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn’t as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said.

A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, said the measure’s supporters.

But critics, including former U.S. House Speaker Nancy Pelosi, argued that the bill would “kill California tech” and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said.

Newsom’s decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations.

Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline in August. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions.

The governor said earlier this summer he wanted to protect California’s status as a global leader in AI, noting that 32 of the world’s top 50 AI companies are located in the state.

He has promoted California as an early adopter, noting the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.

Earlier in September, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use.

But even with Newsom’s veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

“They are going to potentially either copy it or do something similar next legislative session,” Rice said. “So it’s not going away.”