China’s Digital Silk Road exports internet technology, controls

WASHINGTON — China promotes its help to Southeast Asian countries in modernizing their digital landscapes through investments in infrastructure as part of its “Digital Silk Road.” But rights groups say Beijing is also exporting its model of authoritarian governance of the internet through censorship, surveillance and controls.

China’s state media this week announced that Chinese appliance manufacturer Midea Group had jointly built its first overseas 5G factory in Thailand with Thai mobile operator AIS, Chinese telecom service provider China Unicom and tech giant Huawei.

The 208,000-square-meter smart factory will have its own 5G network, Xinhua news agency reported.

Earlier this month, Beijing reached an agreement with Cambodia to establish a Digital Law Library of the Association of Southeast Asian Nations (ASEAN) Inter-Parliamentary Assembly. Cambodia’s Khmer Times said the objective is to “expand all-round cooperation in line with the strategic partnership and building a common destiny community.”

But parallel to China’s state media-promoted technology investments, rights groups say Beijing is also helping countries in the region to build what they call “digital authoritarian governance.”

Article 19, an international human rights organization named after Article 19 of the Universal Declaration of Human Rights and dedicated to promoting freedom of expression globally, said in an April report that the purpose of the Digital Silk Road is not solely to promote China’s technology industry. The report, “China: The rise of digital repression in the Indo-Pacific,” says Beijing is also using its technology to reshape the region’s standards of digital freedom and governance to increasingly match its own.

VOA contacted the Chinese Embassy in the U.S. for a response but did not receive one by the time of publication.

Model of digital governance

Looking at case studies of Cambodia, Malaysia, Nepal and Thailand, the Article 19 report says Beijing is spreading China’s model of digital governance along with Chinese technology and investments from companies such as Huawei, ZTE and Alibaba.

Michael Caster, Asia digital program manager with Article 19, told VOA, “China has been successful at providing a needed service, in the delivery of digital development toward greater connectivity, but also in making digital development synonymous with the adoption of PRC [People’s Republic of China]-style digital governance, which is at odds with international human rights and internet freedom principles, by instead promoting notions of total state control through censorship and surveillance, and digital sovereignty away from universal norms.”

The group says that in Thailand, home to the world’s largest overseas Chinese community, agreements with China bolstered internet controls imposed after the country’s 2014 coup. It notes that Bangkok has since been considering a China-style Great Firewall, the censorship mechanism Beijing uses to control online content.

In Nepal, the report notes security and intelligence-sharing agreements with China and concerns that Chinese security camera technology is being used to surveil exiled Tibetans, the largest such group outside India.

The group says Malaysia’s approach to information infrastructure appears to increasingly resemble China’s model, citing Kuala Lumpur’s cybersecurity law passed in April and its partnering with Chinese companies whose technology has been used for repressing minorities inside China.

Most significantly, Article 19 says China is involved at “all levels” of Cambodia’s digital ecosystem. Huawei, which is facing increasing bans in Western nations over cybersecurity concerns, has a monopoly on cloud services in Cambodia.

While Chinese companies say they would not hand over private data to Beijing, experts doubt they would have any choice because of national security laws.

Internet gateway

Phnom Penh announced a decree in 2021 to build a National Internet Gateway similar to China’s Great Firewall, restricting the Cambodian people’s access to Western media and social networking sites.

“That we have seen the normalization of a China-style Great Firewall in some of the countries where China’s influence is most pronounced or its digital development support strongest, such as with Cambodia, is no coincidence,” Caster said.

The Cambodian government says the portal will strengthen national security and help combat tax fraud and cybercrime. But the Internet Society, a U.S.- and Switzerland-based nonprofit internet freedom group, says it would allow the government to monitor individual internet use and transactions, and to trace identities and locations.

Kian Vesteinsson, a senior researcher for technology and democracy with rights group Freedom House, told VOA, “The Chinese Communist Party and companies that are aligned with the Chinese state have led a charge internationally to push for internet fragmentation. And when I say internet fragmentation, I mean these efforts to carve out domestic internets that are isolated from global internet traffic.”

Despite Chinese support and investment, Vesteinsson notes that Cambodia has not yet implemented the plan for a government-controlled internet.

“Building the Chinese model of digital authoritarianism into a country’s internet infrastructure is extraordinarily difficult. It’s expensive. It requires technical capacity. It requires state capacity, and all signs point to the Cambodian government struggling on those fronts.”

Vesteinsson says while civil society and foreign political pressure play a role, business concerns are also relevant as requirements to censor online speech or spy on users create costs for the private sector.

“These governments that are trying to cultivate e-commerce should keep in mind that a legal environment that is free from these obligations to do censorship and surveillance will be more appealing to companies that are evaluating whether to start up domestic operations,” he said.

Article 19’s Caster says countries concerned about China’s authoritarian internet model spreading should do more to support connectivity and internet development worldwide.

“This support should be based on human rights law and internet freedom principles,” he said, “to prevent China from exploiting internet development needs to position its services – and often by extension its authoritarian model – as the most accessible option.”

China will hold its annual internet conference in Beijing July 9-11. China’s Xinhua news agency reports this year’s conference will discuss artificial intelligence, digital government, information technology application innovation, data security and international cooperation.

Adrianna Zhang contributed to this report.

Attempts to regulate AI’s hidden hand in Americans’ lives flounder

DENVER — The first attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses nationwide.

Only one of seven bills aimed at preventing AI’s penchant for discrimination when making consequential decisions — including who gets hired, who receives money for a home or who gets medical care — has passed. Colorado Gov. Jared Polis hesitantly signed the bill on Friday.

Colorado’s bill and those that faltered in Washington, Connecticut and elsewhere faced battles on many fronts: fights between civil rights groups and the tech industry, lawmakers wary of wading into a technology few yet understand, and governors worried about being the odd state out and spooking AI startups.

Polis signed Colorado’s bill “with reservations,” saying in a statement he was wary of regulations dousing AI innovation. The bill has a two-year runway and can be altered before it becomes law.

“I encourage (lawmakers) to significantly improve on this before it takes effect,” Polis wrote.

Colorado’s proposal, along with its six sister bills, is complex but will broadly require companies to assess the risk of discrimination from their AI and inform customers when AI was used to help make a consequential decision for them.

The bills are separate from more than 400 AI-related bills that have been debated this year. Most are aimed at slices of AI, such as the use of deepfakes in elections or to make pornography.

The seven bills are more ambitious, applying across major industries and targeting discrimination, one of the technology’s most perverse and complex problems.

“We actually have no visibility into the algorithms that are used, whether they work or they don’t, or whether we’re discriminated against,” said Rumman Chowdhury, AI envoy for the U.S. Department of State who previously led Twitter’s AI ethics team.

While anti-discrimination laws are already on the books, those who study AI discrimination say it’s a different beast, which the U.S. is already behind in regulating.

“The computers are making biased decisions at scale,” said Christine Webber, a civil rights attorney who has worked on class action lawsuits over discrimination including against Boeing and Tyson Foods. Now, Webber is nearing final approval on one of the first-in-the-nation settlements in a class action over AI discrimination.

“Not, I should say, that the old systems were perfectly free from bias either,” said Webber. But “any one person could only look at so many resumes in the day. So you could only make so many biased decisions in one day and the computer can do it rapidly across large numbers of people.”

When you apply for a job, an apartment or a home loan, there’s a good chance AI is assessing your application: sending it up the line, assigning it a score or filtering it out. As many as 83% of employers use algorithms to help in hiring, according to estimates from the Equal Employment Opportunity Commission.

AI itself doesn’t know what to look for in a job application, so it’s taught based on past resumes. The historical data that is used to train algorithms can smuggle in bias.

Amazon, for example, worked on a hiring algorithm that was trained on old resumes: largely male applicants. When assessing new applicants, it downgraded resumes with the word “women’s” or that listed women’s colleges because they were not represented in the historical data — the resumes — it had learned from. The project was scuttled.
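The mechanism is simple enough to sketch in a few lines. The toy scorer below is hypothetical (it is not Amazon’s actual system, and the resume data is invented): it is “trained” only on token counts from past hires, so a token that never appeared in the skewed history, such as “women’s,” contributes zero signal and quietly drags down an otherwise identical resume.

```python
from collections import Counter

# Hypothetical historical hires; the pool skews toward one demographic.
historical_hires = [
    "software engineer chess club captain",
    "software developer football team lead",
    "engineer robotics club member",
]

def train(resumes):
    """Count how often each token appears among past hires."""
    counts = Counter()
    for resume in resumes:
        counts.update(resume.split())
    return counts

def score(model, resume):
    """Average historical frequency of the resume's tokens."""
    tokens = resume.split()
    return sum(model[t] for t in tokens) / len(tokens)

model = train(historical_hires)

baseline = score(model, "software engineer chess club")
flagged = score(model, "software engineer women's chess club")
# flagged < baseline: the extra token is absent from the training history,
# so the equally qualified resume scores lower.
```

Nothing in the code mentions gender, yet the ranking reproduces the bias baked into the historical data, which is why audits of training data, not just of the algorithm, are central to the proposed laws.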

Webber’s class action lawsuit alleges that an AI system that scores rental applications disproportionately assigned lower scores to Black or Hispanic applicants. A study found that an AI system built to assess medical needs passed over Black patients for special care.

Studies and lawsuits have allowed a glimpse under the hood of AI systems, but most algorithms remain veiled. Americans are largely unaware that these tools are being used, polling from Pew Research shows. Companies generally aren’t required to explicitly disclose that an AI was used.

“Just pulling back the curtain so that we can see who’s really doing the assessing and what tool is being used is a huge, huge first step,” said Webber. “The existing laws don’t work if we can’t get at least some basic information.”

That’s what Colorado’s bill, along with another surviving bill in California, is trying to change. The bills, including a flagship proposal in Connecticut that was killed under opposition from the governor, are largely similar.

Colorado’s bill will require companies using AI to help make consequential decisions for Americans to annually assess their AI for potential bias; implement an oversight program within the company; tell the state attorney general if discrimination was found; and inform customers when an AI was used to help make a decision for them, including an option to appeal.

Labor unions and academics fear that a reliance on companies overseeing themselves means it’ll be hard to proactively address discrimination in an AI system before it’s done damage. Companies are fearful that forced transparency could reveal trade secrets, including in potential litigation, in this hyper-competitive new field.

AI companies also pushed for, and generally received, a provision that only allows the attorney general, not citizens, to file lawsuits under the new law. Enforcement details have been left up to the attorney general.

While larger AI companies have more or less been on board with these proposals, a group of smaller Colorado-based AI companies said the requirements might be manageable by behemoth AI companies, but not by budding startups.

“We are in a brand new era of primordial soup,” said Logan Cerkovnik, founder of Thumper.ai, referring to the field of AI. “Having overly restrictive legislation that forces us into definitions and restricts our use of technology while this is forming is just going to be detrimental to innovation.”

All agreed, along with many AI companies, that what’s formally called “algorithmic discrimination” is critical to tackle. But they said the bill as written falls short of that goal. Instead, they proposed beefing up existing anti-discrimination laws.

Chowdhury worries that lawsuits are too costly and time-consuming to be an effective enforcement tool and argues that laws should go beyond even what Colorado is proposing. She and other academics have instead proposed accredited, independent organizations that can explicitly test AI algorithms for potential bias.

“You can understand and deal with a single person who is discriminatory or biased,” said Chowdhury. “What do we do when it’s embedded into the entire institution?”

Australian researchers unveil device that harvests water from the air

SYDNEY — A device that absorbs water from air to produce drinkable water was officially launched in Australia Wednesday.

Researchers say the so-called Hydro Harvester, capable of producing up to 1,000 liters of drinkable water a day, could be “lifesaving during drought or emergencies.”

The device absorbs water from the atmosphere. Solar energy, or heat harnessed from sources such as industrial processes, is used to generate hot, humid air, which is then allowed to cool, producing water for drinking or irrigation.

The Australian team said that unlike other commercially available atmospheric water generators, their invention works by heating air instead of cooling it.

Laureate Professor Behdad Moghtaderi, a chemical engineer and director of the University of Newcastle’s Centre for Innovative Energy Technologies, told VOA how the technology operates.  

“Hydro Harvester uses an absorbing material to absorb and dissolve moisture from air. So essentially, we use renewable energy, let’s say, for instance, solar energy or waste heat. We basically produce super saturated, hot, humid air out of the system,” Moghtaderi said. “When you condense water contained in that air you would have the drinking water at your disposal.”

The researchers say the device can produce enough drinking water each day to sustain a small rural town of up to 400 people. It could also help farmers keep livestock alive during droughts.
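A rough back-of-envelope calculation shows the scale of air such a device must process. The sketch below is our own estimate, not the Hydro Harvester’s specifications; the temperature (25°C) and relative humidity (60%) are assumed conditions, and the standard Magnus approximation is used for vapor pressure.

```python
import math

def saturation_vapor_pressure_hpa(t_c):
    """Magnus approximation for saturation vapor pressure (hPa) at t_c deg C."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def absolute_humidity_g_m3(t_c, rel_humidity):
    """Grams of water vapor per cubic meter of air."""
    e_hpa = rel_humidity * saturation_vapor_pressure_hpa(t_c)
    return 216.7 * e_hpa / (t_c + 273.15)

# Warm, moderately humid air holds roughly 14 g of water per cubic meter.
ah = absolute_humidity_g_m3(25.0, 0.60)

# Air volume needed per day to yield 1,000 L (1,000,000 g), assuming every
# gram of moisture were captured -- a theoretical lower bound.
volume_m3 = 1_000_000 / ah
```

Even with perfect capture, that works out to tens of thousands of cubic meters of air per day, which is why the stated 1,000-liter output implies continuously moving large volumes of air through the sorbent.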

Moghtaderi says the technology could be used in parts of the world where water is scarce.

Researchers were motivated by the fact that Australia is an arid country.

“More than 2 billion people around the world, they are in a similar situation where they do not have access to, sort of, high-quality water and they deal with water scarcity,” Moghtaderi said.

Trials of the technology will be conducted in several remote Australian communities this year.

The World Economic Forum, an international organization, says “water scarcity continues to be a pervasive global challenge.”

It believes that atmospheric water generation technology is a “promising emergency solution that can immediately generate drinkable water using moisture in the air.”

However, it cautions that generally the technology is not cheap, and estimates that one mid-sized commercial unit can cost between $30,000 and $50,000.

Researchers use artificial intelligence to classify brain tumors

SYDNEY — Researchers in Australia and the United States say that a new artificial intelligence tool has allowed them to classify brain tumors more quickly and accurately.  

The current method for identifying different kinds of brain tumors, while accurate, can take several weeks to produce results.  The method, called DNA methylation-based profiling, is not available at many hospitals around the world.

To address these challenges, a research team from the Australian National University, in collaboration with the National Cancer Institute in the United States, has developed a way to predict DNA methylation, which acts like a switch to control gene activity.  

This allows them to classify brain tumors into 10 major categories using a deep learning model, a branch of artificial intelligence that teaches computers to process data in a way inspired by the human brain.

The joint U.S.-Australian system is called DEPLOY and uses microscopic pictures of a patient’s tissue called histopathology images.

The researchers see the DEPLOY technology as complementary to an initial diagnosis by a pathologist or physician.

Danh-Tai Hoang, a research fellow at the Australian National University, told VOA that AI will enhance current diagnostic methods that can often be slow.

“The technique is very time consuming,” Hoang said. “It is often around two to three weeks to obtain a result from the test, whereas patients with high-grade brain tumors often require treatment as soon as possible because time is the goal for brain tumor(s), so they need to get treatment as soon as possible.”

The research team said its AI model was validated on large datasets covering approximately 4,000 patients from across the United States and Europe and achieved an accuracy rate of 95 percent.

Their study has been published in the journal Nature Medicine.

Companies trying to attract more smartphone users across Africa, but there are risks

ACCRA, GHANA — Anita Akpeere prepared fried rice in her kitchen in Ghana’s capital as a flurry of notifications for restaurant orders lit up apps on her phone. “I don’t think I could work without a phone in my line of business,” she said, as requests came in for her signature dish, a traditional fermented dumpling.

Internet-enabled phones have transformed many lives, but they can play a unique role in sub-Saharan Africa, where infrastructure and public services are among the world’s least developed, said Jenny Aker, a professor who studies the issue at Tufts University. At times, technology in Africa has leapfrogged gaps, including providing access to mobile money for people without bank accounts.

Despite growing mobile internet coverage on the continent of 1.3 billion people, just 25% of adults in sub-Saharan Africa have access to it, according to Claire Sibthorpe, head of digital inclusion at the U.K.-based mobile phone lobbying group GSMA. Expense is the main barrier. The cheapest smartphone costs up to 95% of the monthly salary for the poorest 20% of the region’s population, Sibthorpe said.

Literacy rates that are below the global average, and lack of services in many African languages — some 2,000 are spoken across the continent, according to The African Language Program at Harvard University — are other reasons why a smartphone isn’t a compelling investment for some.

“If you buy a car, it’s because you can drive it,” said Alain Capo-Chichi, chief executive of CERCO Group, a company that has developed a smartphone that functions through voice command and is available in 50 African languages such as Yoruba, Swahili and Wolof.

Even in Ghana, where the lingua franca is English, knowing how to use smartphones and apps can be a challenge for newcomers.

One new company in Ghana is trying to close the digital gap. Uniti Networks offers financing to help make smartphones more affordable and coaches users to navigate its platform of apps.

For Cyril Fianyo, a 64-year-old farmer in Ghana’s eastern Volta region, the phone has expanded his activities beyond calls and texts. Using his identity card, he registered with Uniti, putting down a deposit of 340 Ghanaian cedis ($25) for a smartphone; he will pay the remaining 910 cedis ($66) in installments.

He was shown how to navigate apps that interested him, including a third-party farming app called Cocoa Link that offers videos of planting techniques, weather information and details about the challenges of climate change that have affected cocoa and other crops.

Fianyo, who previously planted according to his intuition and rarely interacted with farming advisers, was optimistic that the technology would increase his yields.

“I will know the exact time to plant because of the weather forecast,” he said.

Kami Dar, chief executive of Uniti Networks, said the mobile internet could help address other challenges including accessing health care. The company has launched in five communities across Ghana with 650 participants and wants to reach 100,000 users within five years.

Aker, the scholar, noted that the potential impact of mobile phones across Africa is immense but said there is limited evidence that paid health or agriculture apps are benefiting people there. The only impacts shown to be beneficial so far, she said, are reminders to take medicine or get vaccinated.

Having studied agricultural apps and their impact, she said it doesn’t seem that farmers are getting better prices or improving their income.

Capo-Chichi from CERCO Group said a dearth of useful apps and content is another reason that more people in Africa aren’t buying smartphones.

Dar said Uniti Networks learns from mistakes. In a pilot in northern Ghana designed to help cocoa farmers contribute to their pensions, there was high engagement, but farmers didn’t find the app user-friendly and needed extra coaching. After the feedback, the pension provider changed the interface to improve navigation.

Others are finding benefit in Uniti’s platform. Mawufemor Vitor, a church secretary in Hohoe, said one health app has helped her track her menstrual cycle to help prevent pregnancy. And Fianyo, the farmer, has used the platform to find information on herbal medicine.

But mobile phones are no substitute for investment in public services and infrastructure, Aker said.

She also expressed concerns about the privacy of data in the hands of private technology providers and governments, saying digital IDs in development in African nations such as Kenya and South Africa could pave the way for further abuses.

Uniti Networks is a for-profit business, paid for each customer that signs up for paying apps. Dar asserted that he was not targeting vulnerable populations to sell them unnecessary services and said Uniti only features apps that align with its idea of impact, with a focus on health, education, finance and agriculture.

Dar said Uniti has rejected lucrative approaches from many companies including gambling firms. “Tech can be used for awful things,” he said.

He acknowledged that Uniti tracks users on the platform to provide incentives, in the form of free data, and to give feedback to app developers. He also acknowledged that users’ health and financial data could be at risk from outside attack but said Uniti has decentralized its data storage in an attempt to lessen the risk.

Still, the potential to provide solutions can outweigh the risks, Aker said, noting two areas where the technology could be transformative: education and insurance.

She said mobile phones could help overcome the illiteracy that still affects 773 million people worldwide according to UNESCO. Increased access to insurance, still not widely used in parts of Africa, could provide protection to millions who face shocks on the front lines of climate change and conflict.

Back in Fianyo’s fields, his new smartphone has attracted curiosity. “This is something I would like to be part of,” said neighboring farmer Godsway Kwamigah.

Blue Origin flies thrill seekers to space, including oldest astronaut 

WASHINGTON — After a nearly two-year hiatus, Blue Origin flew adventurers to space on Sunday, including a former Air Force pilot who was denied the chance to become the United States’ first Black astronaut decades ago.

It was the first crewed launch for the enterprise owned and founded by Amazon billionaire Jeff Bezos since a rocket mishap in 2022 left rival Virgin Galactic as the sole operator in the fledgling suborbital tourism market. 

Six people, including the sculptor Ed Dwight, who was on track to become NASA’s first astronaut of color in the 1960s before being controversially spurned, launched at about 9:36 a.m. local time (1436 GMT) from the Launch Site One base in West Texas, a live feed showed.

Dwight — at 90 years, 8 months and 10 days — became the oldest person to ever go to space. 

“This is a life-changing experience, everybody needs to do this,” he exclaimed after the flight. 

Dwight added: “I thought I didn’t really need this in my life,” reflecting on his omission from the astronaut corps, which was his first experience with failure as a young man. “But I lied,” he said with a hearty laugh. 

Mission NS-25 is the seventh human flight for Blue Origin, which sees short jaunts on the New Shepard suborbital vehicle as a stepping stone to greater ambitions, including the development of a full-fledged heavy rocket and lunar lander. 

To date, the company has flown 31 people aboard New Shepard — a small, fully reusable rocket system named after Alan Shepard, the first American in space. 

The program encountered a setback when a New Shepard rocket caught fire shortly after launch on September 12, 2022; the uncrewed capsule ejected safely.

A federal investigation revealed an overheating engine nozzle was at fault. Blue Origin took corrective steps and carried out a successful uncrewed launch in December 2023, paving the way for Sunday’s mission. 

After liftoff, the sleek, roomy capsule separated from the booster, whose liquid hydrogen-fueled engine produces no carbon emissions. The booster then performed a precision vertical landing.

As the spaceship soared beyond the Karman Line, the internationally recognized boundary of space 100 kilometers above sea level, passengers had the chance to marvel at the Earth’s curvature and unbuckle their seatbelts to float — or somersault — during a few minutes of weightlessness. 

The capsule then reentered the atmosphere, deploying its parachutes for a desert landing in a puff of sand. However, one of the three parachutes failed to fully inflate, possibly resulting in a harder landing than expected. 

Bezos himself was on the program’s first crewed flight in 2021. A few months later, Star Trek’s William Shatner blurred the lines between science fiction and reality when he became the world’s oldest astronaut at age 90, decades after he first played a space traveler.

Dwight, who was almost two months older than Shatner at the time of his flight, became only the second nonagenarian to venture beyond Earth. 

Astronaut John Glenn remains the oldest person to orbit the planet, a feat he achieved in 1998 at age 77 aboard the Space Shuttle Discovery.

Blue Origin’s competitor in suborbital space is Virgin Galactic, which deploys a supersonic spaceplane that is dropped from beneath the wings of a massive carrier plane at high altitude. 

Virgin Galactic experienced its own two-year safety pause because of an anomaly linked with the 2021 flight that carried its founder British tycoon Richard Branson into space. But the company later hit its stride with half a dozen successful flights in quick succession. 

Sunday’s mission finally gave Dwight the chance he was denied decades ago. 

He was an elite test pilot when he was appointed by President John F. Kennedy to join a highly competitive Air Force program known as a pathway to the astronaut corps, but he was ultimately not picked.

He left the military in 1966, citing the strain of racial politics, and dedicated his life to telling Black history through sculpture. His art, displayed around the country, includes iconic figures like Martin Luther King Jr., Frederick Douglass and Harriet Tubman.

Musk, Indonesian health minister, launch Starlink for health sector 

DENPASAR, BALI, INDONESIA — Elon Musk and Indonesian Health Minister Budi Gunadi Sadikin launched SpaceX’s satellite internet service for the nation’s health sector on Sunday, aiming to improve access in remote parts of the sprawling archipelago.   

Musk, the billionaire head of SpaceX and Tesla, arrived on the Indonesian resort island of Bali by private jet before attending the launch ceremony at a community health center in the provincial capital, Denpasar.

Musk, wearing a green batik shirt, said the availability of the Starlink service in Indonesia would help millions in far-flung parts of the country to access the internet. The country is home to more than 270 million people and spans three time zones.

“I’m very excited to bring connectivity to places that have low connectivity,” Musk said. “If you have access to the internet, you can learn anything.”

Starlink was launched at three Indonesian health centers on Sunday, including two in Bali and one on the remote island of Aru in Maluku.   

A video presentation screened at the launch showed how high internet speeds enabled the real-time input of data to better tackle health challenges such as stunting and malnutrition.   

Asked about whether he planned to also invest in Indonesia’s electric vehicle industry, Musk said he was focused on Starlink first.   

“We are focusing this event on Starlink and the benefits that connectivity brings to remote islands,” he said. “I think it’s really to emphasize the importance of internet connectivity, how much of that can be a lifesaver.”

Indonesia’s government has been trying for years to lure Musk’s auto firm Tesla to build electric vehicle manufacturing plants in the country, as it seeks to develop its EV sector using its rich nickel resources.

The tech tycoon is scheduled to meet Indonesian President Joko Widodo on Monday and will also address the World Water Forum taking place on the island.

Communications Minister Budi Arie Setiadi, who also attended the Bali launch, said Starlink was now available commercially but that the government would focus the service first on outlying and underdeveloped regions.

Before Sunday’s launch, Starlink obtained a permit to operate as an internet service provider for retail consumers and received a very small aperture terminal (VSAT) permit giving it the go-ahead to provide networks, Budi Setiadi told Reuters.

SpaceX’s Starlink, which owns around 60% of the roughly 7,500 satellites orbiting Earth, is dominant in the satellite internet sphere.

Indonesia is the third country in Southeast Asia where Starlink will operate. Malaysia issued the firm a license to provide internet services last year and a Philippine-based firm signed a deal with SpaceX in 2022.   

Starlink is also used extensively in Ukraine, where it is employed by the military, hospitals, businesses and aid organizations. 

Illness took away her voice. AI created a replica she carries in her phone

PROVIDENCE, RHODE ISLAND — The voice Alexis “Lexi” Bogan had before last summer was exuberant.

She loved to belt out Taylor Swift and Zach Bryan ballads in the car. She laughed all the time — even while corralling misbehaving preschoolers or debating politics with friends over a backyard fire pit. In high school, she was a soprano in the chorus.

Then that voice was gone.

Doctors in August removed a life-threatening tumor lodged near the back of her brain. When the breathing tube came out a month later, Bogan had trouble swallowing and strained to say “Hi” to her parents. Months of rehabilitation aided her recovery, but her speech is still impaired. Friends, strangers and her own family members struggle to understand what she is trying to tell them.

In April, the 21-year-old got her old voice back. Not the real one, but a voice clone generated by artificial intelligence that she can summon from a phone app. Trained on a 15-second time capsule of her teenage voice — sourced from a cooking demonstration video she recorded for a high school project — her synthetic but remarkably real-sounding AI voice can now say almost anything she wants.

She types a few words or sentences into her phone, and the app instantly reads them aloud.

“Hi, can I please get a grande iced brown sugar oat milk shaken espresso,” said Bogan’s AI voice as she held the phone out her car’s window at a Starbucks drive-thru.

Experts have warned that rapidly improving AI voice-cloning technology can amplify phone scams, disrupt democratic elections and violate the dignity of people — living or dead — who never consented to having their voice recreated to say things they never spoke.

It’s been used to produce deepfake robocalls to New Hampshire voters mimicking President Joe Biden. In Maryland, authorities recently charged a high school athletic director with using AI to generate a fake audio clip of the school’s principal making racist remarks.

But Bogan and a team of doctors at Rhode Island’s Lifespan hospital group believe they’ve found a use that justifies the risks. Bogan is one of the first people — and the only one with her condition — to recreate a lost voice with OpenAI’s new Voice Engine. Some other AI providers, such as the startup ElevenLabs, have tested similar technology for people with speech impediments and loss — including a lawyer who now uses her voice clone in the courtroom.

“We’re hoping Lexi’s a trailblazer as the technology develops,” said Dr. Rohaid Ali, a neurosurgery resident at Brown University’s medical school and Rhode Island Hospital. Millions of people with debilitating strokes, throat cancer or neurodegenerative diseases could benefit, he said.

“We should be conscious of the risks, but we can’t forget about the patient and the social good,” said Dr. Fatima Mirza, another resident working on the pilot. “We’re able to help give Lexi back her true voice and she’s able to speak in terms that are the most true to herself.”

Mirza and Ali, who are married, caught the attention of ChatGPT-maker OpenAI because of their previous research project at Lifespan using the AI chatbot to simplify medical consent forms for patients. The San Francisco company reached out while on the hunt earlier this year for promising medical applications for its new AI voice generator.

Bogan was still slowly recovering from surgery. The illness started last summer with headaches, blurry vision and a droopy face, alarming doctors at Hasbro Children’s Hospital in Providence. They discovered a vascular tumor the size of a golf ball pressing on her brain stem and entangled in blood vessels and cranial nerves.

“It was a battle to get control of the bleeding and get the tumor out,” said pediatric neurosurgeon Dr. Konstantina Svokos.

The tumor’s location and severity coupled with the complexity of the 10-hour surgery damaged Bogan’s control of her tongue muscles and vocal cords, impeding her ability to eat and talk, Svokos said.

“It’s almost like a part of my identity was taken when I lost my voice,” Bogan said.

The feeding tube came out this year. Speech therapy continues, enabling her to speak intelligibly in a quiet room but with no sign she will recover the full lucidity of her natural voice.

“At some point, I was starting to forget what I sounded like,” Bogan said. “I’ve been getting so used to how I sound now.”

Whenever the phone rang at the family’s home in the Providence suburb of North Smithfield, she would push it over to her mother to take her calls. She felt she was burdening her friends whenever they went to a noisy restaurant. Her dad, who has hearing loss, struggled to understand her.

Back at the hospital, doctors were looking for a pilot patient to experiment with OpenAI’s technology.

“The first person that came to Dr. Svokos’ mind was Lexi,” Ali said. “We reached out to Lexi to see if she would be interested, not knowing what her response would be. She was game to try it out and see how it would work.”

Bogan had to go back a few years to find a suitable recording of her voice to “train” the AI system on how she spoke. It was a video in which she explained how to make a pasta salad.

Her doctors intentionally fed the AI system just a 15-second clip; cooking sounds marred other parts of the video. That was also all OpenAI needed — an improvement over previous technology, which required much lengthier samples.

They also knew that getting something useful out of 15 seconds could be vital for any future patients who have no trace of their voice on the internet. A brief voicemail left for a relative might have to suffice.

When they tested it for the first time, everyone was stunned by the quality of the voice clone. Occasional glitches — a mispronounced word, a missing intonation — were mostly imperceptible. In April, doctors equipped Bogan with a custom-built phone app that only she can use.

“I get so emotional every time I hear her voice,” said her mother, Pamela Bogan, tears in her eyes.

“I think it’s awesome that I can have that sound again,” added Lexi Bogan, saying it helped “boost my confidence to somewhat where it was before all this happened.”

She now uses the app about 40 times a day and sends feedback she hopes will help future patients. One of her first experiments was to speak to the kids at the preschool where she works as a teaching assistant. She typed in “ha ha ha ha” expecting a robotic response. To her surprise, it sounded like her old laugh.

She’s used it at Target and Marshall’s to ask where to find items. It’s helped her reconnect with her dad. And it’s made it easier for her to order fast food.

Bogan’s doctors have started cloning the voices of other willing Rhode Island patients and hope to bring the technology to hospitals around the world. OpenAI said it is treading cautiously in expanding the use of Voice Engine, which is not yet publicly available.

A number of smaller AI startups already sell voice-cloning services to entertainment studios or make them more widely available. Most voice-generation vendors say they prohibit impersonation or abuse, but they vary in how they enforce their terms of use.

“We want to make sure that everyone whose voice is used in the service is consenting on an ongoing basis,” said Jeff Harris, OpenAI’s lead on the product. “We want to make sure that it’s not used in political contexts. So we’ve taken an approach of being very limited in who we’re giving the technology to.”

Harris said OpenAI’s next step involves developing a secure “voice authentication” tool so that users can replicate only their own voice. That might be “limiting for a patient like Lexi, who had sudden loss of her speech capabilities,” he said. “So we do think that we’ll need to have high-trust relationships, especially with medical providers, to give a little bit more unfettered access to the technology.”

Bogan has impressed her doctors with her focus on thinking about how the technology could help others with similar or more severe speech impediments.

“Part of what she has done throughout this entire process is think about ways to tweak and change this,” Mirza said. “She’s been a great inspiration for us.”

While for now she must fiddle with her phone to get the voice engine to talk, Bogan imagines an AI voice engine that improves on older remedies for speech recovery, such as the robotic-sounding electrolarynx or a voice prosthesis, by melding with the human body or translating words in real time.

She’s less sure about what will happen as she grows older and her AI voice continues to sound like she did as a teenager. Maybe the technology could “age” her AI voice, she said.

For now, “even though I don’t have my voice fully back, I have something that helps me find my voice again,” she said.

Changes from Visa mean Americans will carry fewer credit, debit cards

new york — Your wallet may soon be getting thinner.

Visa on Wednesday announced major changes to how credit and debit cards will operate in the U.S. in the coming months and years.

The new features could mean Americans will be carrying fewer physical cards in their wallets, and will make the 16-digit credit or debit card number printed on every card increasingly irrelevant.

They will be some of the biggest changes to how payments operate in the U.S. since the rollout of chip-embedded cards several years ago. They also come as Americans have many more options to pay for purchases beyond “credit or debit,” including buy now, pay later companies, peer-to-peer payment options, paying directly with a bank, or digital payment systems such as Apple Pay.

“I think (with these features) we’re getting past the point where consumers may never need to manually enter an account number ever again,” said Mark Nelsen, Visa’s global head of consumer payments.

The biggest change coming for Americans will be the ability for banks to issue one physical payment card connected to multiple bank accounts. That means no more carrying, for example, a Bank of America or Chase debit card as well as their respective credit cards in a physical wallet. Americans will be able to set criteria with their bank, such as routing all purchases below $100, or all purchases from a certain merchant, to the debit card, with other purchases going on the credit card.

The feature, already being used in Asia, will be available this summer. Buy now, pay later company Affirm is the first of Visa’s customers to roll out the feature in the U.S.
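The criteria described above amount to an ordered set of routing rules evaluated per transaction. A minimal sketch of that idea (hypothetical names and logic, not Visa’s actual system) might look like:

```python
def route_purchase(amount, merchant, rules, default="credit"):
    """Pick which linked account a purchase settles to.

    `rules` is a list of (predicate, account) pairs checked in order;
    the first matching rule wins, otherwise the default account is used.
    """
    for predicate, account in rules:
        if predicate(amount, merchant):
            return account
    return default

# Example criteria from the article: purchases below $100, or purchases
# at a chosen merchant, go to the debit account; everything else to credit.
rules = [
    (lambda amt, m: amt < 100, "debit"),
    (lambda amt, m: m == "Corner Grocery", "debit"),
]

print(route_purchase(42.50, "Gas Station", rules))         # debit: under $100
print(route_purchase(250.00, "Corner Grocery", rules))     # debit: chosen merchant
print(route_purchase(250.00, "Electronics Store", rules))  # credit: no rule matched
```

First-match-wins ordering keeps the behavior predictable for the cardholder: more specific rules can simply be placed earlier in the list.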

Fraud prompts changes

Some of Visa’s new features are in response to online-payments fraud, which continues to increase as more countries adopt digital payments. The San Francisco-based company estimates that payment fraud happens roughly seven times more often online than in person, and there are now billions of stolen credit and debit card numbers available to criminals.

Other new elements are also in response to features that non-payments companies have rolled out in recent years. The Apple Card, which uses Mastercard as its payment network, does not come with a printed 16-digit account number and Apple Card users can request a fresh credit card number at any time without having to dispose of the physical card.

Visa executives see a future where banks will issue cards on which the 16-digit account number, if it is printed at all, is largely symbolic.

Soon, fingerprints can approve transactions

Among the other updates unveiled by Visa are changes to tap-to-pay features. Americans will be able to tap a credit or debit card against their smartphone to add it to a mobile wallet, rather than scanning the card with the phone’s camera, or tap the card to the phone to approve an online transaction. Visa will also start implementing biometrics to approve transactions, similar to how Apple devices use a fingerprint or face scan.

The features will take time to filter down to the banks, which will decide when or what to implement for their customers. But because the banks and credit card companies are Visa’s customers, and issue cards with the Visa label, these are features that the financial institutions have been asking for.

Kenya conference showcases technology to help people with disabilities

In Africa, about 15% of the population faces disability challenges despite advancements in technology. Limited infrastructure and high cost of assistive tech create barriers to digital access, leading to exclusion. A conference in Nairobi this week aims to help change that. Mohammed Yusuf reports.

New Zealand researchers say artificial intelligence could enhance surgery

SYDNEY — Researchers in New Zealand say that artificial intelligence, or AI, can help solve problems for patients and doctors.  

A new study from the University of Auckland says that an emerging area is the use of AI during operations using so-called “computer vision.”

The study, published in the journal Nature Medicine, says that artificial intelligence has the potential to identify abnormalities during operations and to unburden overloaded hospitals by enhancing the monitoring of patients to help them recover after surgery at home.

The New Zealand research details how AI “tools are rapidly maturing for medical applications.”  It asserts that “medicine is entering an exciting phase of digital innovation.”

The New Zealand team is investigating computer vision, which describes a machine’s understanding of videos and images. 

Dr. Chris Varghese, a doctoral researcher in the Department of Surgery at the Faculty of Medical and Health Sciences at the University of Auckland, led the AI research team.

He told VOA the technology has great potential.

“The use of AI in surgery is a really emerging field. We are seeing a lot of exciting research looking at what we call computer vision, where AI is trying to learn what surgeons see, what the surgical instruments look like, what the different organs look like, and the potential there is to identify abnormal anatomy or what the safest approach to an operation might be using virtual reality and augmented reality to plan ahead of surgeries, which could be really useful in cutting out cancers and things like that.”

Varghese said doctors in New Zealand are already using AI to help sort through patient backlogs.

“We are using automated algorithms to triage really long waiting lists,” he said. “So, getting people prioritized and into clinics ahead of time, based on need, so the right patients are seen at the right time.”

The researchers said there are limitations to the use of artificial intelligence because of concerns about data privacy and ethics.

The report concludes that “numerous apprehensions remain with regard to the integration of AI into surgical practice, with many clinicians perceiving limited scope in a field dominated by experiential” technology.

The study also says that “autonomous robotic surgeons … [are] the most distant of the realizable goals of surgical AI systems.”

Biden sharply hikes US tariffs on billions in Chinese chips, cars

WASHINGTON — U.S. President Joe Biden on Tuesday unveiled a bundle of steep tariff increases on an array of Chinese imports including electric vehicles, computer chips and medical products, risking an election-year standoff with Beijing in a bid to woo voters who give his economic policies low marks.

Biden will keep tariffs put in place by his Republican predecessor Donald Trump while ratcheting up others, including a quadrupling of EV duties to over 100%, the White House said in a statement. It cited “unacceptable risks” to U.S. economic security posed by what it considers unfair Chinese practices that are flooding global markets with cheap goods.

The new measures impact $18 billion in Chinese imported goods including steel and aluminum, semiconductors, batteries, critical minerals, solar cells and cranes, the White House said. The announcement confirmed earlier Reuters reporting.

The United States imported $427 billion in goods from China in 2023 and exported $148 billion to the world’s No. 2 economy, according to the U.S. Census Bureau, a trade gap that has persisted for decades and become an ever more sensitive subject in Washington.

“China’s using the same playbook it has before to power its own growth at the expense of others by continuing to invest, despite excess Chinese capacity and flooding global markets with exports that are underpriced due to unfair practices,” White House National Economic Adviser Lael Brainard told reporters on a conference call.

U.S. Trade Representative Katherine Tai said the revised tariffs were justified because China was continuing to steal U.S. intellectual property and in some cases had become “more aggressive” in cyber intrusions targeting American technology.

She said prior “Section 301” tariffs had minimal impact on U.S. economy-wide prices and employment, but had been effective in reducing U.S. imports of Chinese goods, while increasing imports from other countries.

But Tai recommended tariff exclusions for dozens of industrial machinery import categories from China, including 19 for solar product manufacturing equipment.

Even as Biden’s steps fell in line with Trump’s premise that tougher trade measures are warranted, the Democrat took aim at his opponent in November’s election.

The White House said Trump’s 2020 trade deal with China did not increase American exports or boost American manufacturing jobs, and it said the 10% across-the-board tariffs on goods from all points of origin that Trump has proposed would frustrate U.S. allies and raise prices. Trump has floated tariffs of 60% or higher on all Chinese goods.

Administration officials said their measures are “carefully targeted,” combined with domestic investment, plotted with close allies and unlikely to worsen a bout of inflation that has already angered U.S. voters and imperiled Biden’s re-election bid. They also downplayed the risk of retaliation from Beijing.

Biden has struggled to convince voters of the efficacy of his economic policies despite a backdrop of low unemployment and above-trend economic growth. A Reuters/Ipsos poll last month showed Trump had a 7 percentage-point edge over Biden on the economy.

Analysts have warned that a trade tiff could raise costs for EVs overall, hurting Biden’s climate goals and his aim to create manufacturing jobs.

Biden has said he wants to win this era of competition with China but not to launch a trade war that could hurt the mutually dependent economies. He has worked in recent months to ease tensions in one-on-one talks with Chinese President Xi Jinping.

Both 2024 U.S. presidential candidates have sharply departed from the free-trade consensus that once reigned in Washington, a period capped by China’s joining the World Trade Organization in 2001.

China has said the tariffs are counterproductive and risk inflaming tensions. Trump’s broader imposition of tariffs during his 2017-2021 presidency kicked off a tariff war with China.

As part of the long-awaited tariff update, Biden will increase tariffs this year under Section 301 of the Trade Act of 1974 from 25% to 100% on EVs, bringing total duties to 102.5%, from 7.5% to 25% on lithium-ion EV batteries and other battery parts and from 25% to 50% on photovoltaic cells used to make solar panels. “Certain” critical minerals will have their tariffs raised from nothing to 25%.
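The “total duties” figure combines the new Section 301 rate with the pre-existing base tariff on imported cars. Using the numbers above (the 2.5% base rate is implied by the 102.5% total):

```python
# Total duty on a Chinese-made EV = base tariff + Section 301 rate.
base_tariff = 2.5        # percent, long-standing U.S. tariff on imported cars
section_301_old = 25.0   # percent, prior Section 301 rate
section_301_new = 100.0  # percent, rate after the increase

old_total = base_tariff + section_301_old   # 27.5%
new_total = base_tariff + section_301_new   # 102.5%, the White House figure

duty_on_30k_ev = 30_000 * new_total / 100   # duties on a hypothetical $30,000 EV
print(old_total, new_total, duty_on_30k_ev)
```

In other words, the duty alone would now exceed the sticker price of the vehicle.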

The tariffs on ship-to-shore cranes will rise to 25% from zero, those on syringes and needles will rise to 50% from nothing now and some personal protective equipment (PPE) used in medical facilities will rise to 25% from as little as 0% now. Shortages in PPE made largely in China hampered the United States’ COVID-19 response.

More tariffs will follow in 2025 and 2026 on semiconductors, whose tariff rate will double to 50%, as well as on lithium-ion batteries that are not used in electric vehicles, graphite and permanent magnets, and rubber medical and surgical gloves.

A step Biden previously announced to raise tariffs on some steel and aluminum products will take effect this year, the White House said.  

A number of lawmakers have called for massive hikes on Chinese vehicle tariffs. There are relatively few Chinese-made light-duty vehicles being imported now. Senate Banking Committee Chairman Sherrod Brown wants the Biden administration to ban Chinese EVs outright, over concerns they pose risks to Americans’ personal data.

U.S. Treasury Secretary Janet Yellen, who warned China in April that its excess production of EVs and solar products was unacceptable, said that such concerns were widely shared by U.S. allies and the actions were “motivated not by anti-China policy but by a desire to prevent damaging economic dislocation from unfair economic practices.” 

US vows to stay ahead of China, using AI for fighter jets, navigation

Washington — Two Air Force fighter jets recently squared off in a dogfight in California. One was flown by a pilot. The other wasn’t.

That second jet was piloted by artificial intelligence, with the Air Force’s highest-ranking civilian riding along in the front seat. It was the ultimate display of how far the Air Force has come in developing a technology with its roots in the 1950s. But it’s only a hint of the technology yet to come.

The United States is competing to stay ahead of China on AI and its use in weapon systems. The focus on AI has generated public concern that future wars will be fought by machines that select and strike targets without direct human intervention. Officials say this will never happen, at least not on the U.S. side. But there are questions about what a potential adversary would allow, and the military sees no alternative but to get U.S. capabilities fielded fast.

“Whether you want to call it a race or not, it certainly is,” said Adm. Christopher Grady, vice chairman of the Joint Chiefs of Staff. “Both of us have recognized that this will be a very critical element of the future battlefield. China’s working on it as hard as we are.”

A look at the history of military development of AI, what technologies are on the horizon and how they will be kept under control:

From machine learning to autonomy

AI’s military roots are a hybrid of machine learning and autonomy. Machine learning occurs when a computer analyzes data and rule sets to reach conclusions. Autonomy occurs when those conclusions are applied to act without further human input.

This took an early form in the 1960s and 1970s with the development of the Navy’s Aegis missile defense system. Aegis was trained through a series of human-programmed if/then rule sets to be able to detect and intercept incoming missiles autonomously, and more rapidly than a human could. But the Aegis system was not designed to learn from its decisions and its reactions were limited to the rule set it had.

“If a system uses ‘if/then’ it is probably not machine learning, which is a field of AI that involves creating systems that learn from data,” said Air Force Lt. Col. Christopher Berardi, who is assigned to the Massachusetts Institute of Technology to assist with the Air Force’s AI development.

AI took a major step forward in 2012 when the combination of big data and advanced computing power enabled computers to begin analyzing the information and writing the rule sets themselves. It is what AI experts have called AI’s “big bang.”

The new data created by a computer writing the rules is artificial intelligence. Systems can be programmed to act autonomously from the conclusions reached from machine-written rules, which is a form of AI-enabled autonomy.
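The distinction the passage draws, human-programmed if/then rules versus rules derived from data, can be shown with a toy example (purely illustrative, unrelated to any real defense code):

```python
# Hand-programmed rule, Aegis-style: the threshold is fixed by a human.
def intercept_hand_coded(closing_speed):
    return closing_speed > 300.0  # threshold chosen by the programmer

# Machine learning flavor: the threshold is derived from labeled examples.
def fit_threshold(examples):
    """examples: list of (closing_speed, was_threat) pairs.
    Returns the midpoint between the two class means, a minimal 'learned' rule."""
    threats = [s for s, t in examples if t]
    benign = [s for s, t in examples if not t]
    return (sum(threats) / len(threats) + sum(benign) / len(benign)) / 2

data = [(500, True), (420, True), (90, False), (140, False)]
learned_threshold = fit_threshold(data)  # written from the data, not by a human

def intercept_learned(closing_speed):
    return closing_speed > learned_threshold
```

The hand-coded system can never act outside the rule it was given; the learned one would produce a different threshold if fed different data, which is the property the article identifies as machine learning.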

Testing an AI alternative to GPS navigation

Air Force Secretary Frank Kendall got a taste of that advanced warfighting this month when he flew on Vista, the first F-16 fighter jet to be controlled by AI, in a dogfighting exercise over California’s Edwards Air Force Base.

While that jet is the most visible sign of the AI work underway, there are hundreds of ongoing AI projects across the Pentagon.

At MIT, service members worked to clean up thousands of hours of recorded pilot conversations to create a data set from the flood of messages exchanged between crews and air operations centers during flights, so the AI could learn the difference between critical messages, like a runway being closed, and mundane cockpit chatter. The goal was to have the AI learn which messages are critical to elevate to ensure controllers see them faster.

In another significant project, the military is working on an AI alternative to GPS satellite-dependent navigation.

In a future war, high-value GPS satellites would likely be hit or interfered with. The loss of GPS could blind U.S. communication, navigation and banking systems and make the U.S. military’s fleet of aircraft and warships less able to coordinate a response.

So last year the Air Force flew an AI program — loaded onto a laptop that was strapped to the floor of a C-17 military cargo plane — to work on an alternative solution using the Earth’s magnetic fields.

It has been known that aircraft could navigate by following the Earth’s magnetic fields, but so far that hasn’t been practical because each aircraft generates so much of its own electromagnetic noise that there has been no good way to filter for just the Earth’s emissions.

“Magnetometers are very sensitive,” said Col. Garry Floyd, director for the Department of Air Force-MIT Artificial Intelligence Accelerator program. “If you turn on the strobe lights on a C-17 we would see it.”

The AI learned through the flights and reams of data which signals to ignore and which to follow and the results “were very, very impressive,” Floyd said. “We’re talking tactical airdrop quality.”

“We think we may have added an arrow to the quiver in the things we can do, should we end up operating in a GPS-denied environment. Which we will,” Floyd said.

The AI so far has been tested only on the C-17. Other aircraft will also be tested, and if it works it could give the military another way to operate if GPS goes down.

Safety rails and pilot speak

Vista, the AI-controlled F-16, has considerable safety rails as the Air Force trains it. There are mechanical limits that keep the still-learning AI from executing maneuvers that would put the plane in danger. There is a safety pilot, too, who can take over control from the AI with the push of a button.

The algorithm cannot learn during a flight, so each time up it has only the data and rule sets it has created from previous flights. When a new flight is over, the algorithm is transferred back onto a simulator where it is fed new data gathered in-flight to learn from, create new rule sets and improve its performance.

But the AI is learning fast. Because of the supercomputing speed AI uses to analyze data, and then flying those new rule sets in the simulator, its pace in finding the most efficient way to fly and maneuver has already led it to beat some human pilots in dogfighting exercises.

But safety is still a critical concern, and officials said the most important way to take safety into account is to control what data is reinserted into the simulator for the AI to learn from.

California to use generative AI to improve services, cut traffic jams 

sacramento, california — California could soon deploy generative artificial intelligence tools to help reduce traffic jams, make roads safer and provide tax guidance, among other things, under new agreements announced Thursday as part of Governor Gavin Newsom’s efforts to harness the power of new technologies for public services. 

The state is partnering with five companies to create generative AI tools using technologies developed by tech giants such as Microsoft-backed OpenAI and Google- and Amazon-backed Anthropic that would ultimately help the state provide better services to the public, administration officials said. 

“It is a very good sign that a lot of these companies are putting their focus on using GenAI for governmental service delivery,” said Amy Tong, secretary of government operations for California. 

The companies will start a six-month internal trial in which state workers test and evaluate the tools. The companies will be paid $1 for their proposals. The state, which faces a significant budget deficit, can then reassess whether any tools could be fully implemented under new contracts. All the tools are considered low risk, meaning they don’t interact with confidential data or personal information, an administration spokesperson said. 

Newsom, a Democrat, touts California as a global hub for AI technology, noting 35 of the world’s top 50 AI companies are located in the state. He signed an executive order last year requiring the state to start exploring responsible ways to incorporate generative AI by this summer, with a goal of positioning California as an AI leader.

In January, the state started asking technology companies to come up with generative AI tools for public services. Last month, California was one of the first states to roll out guidelines on when and how state agencies could buy such tools. 

Generative AI, a branch of AI that can create new content such as text, audio and photos, has significant potential to help government agencies become more efficient, but there’s also an urgent need for safeguards to limit risks, state officials and experts said. In New York City, an AI-powered chatbot created by the city to help small businesses was found to dole out false guidance and advise companies to violate the law. The rapidly growing technology has also raised concerns about job losses, misinformation, privacy and automation bias. 

While state governments are struggling to regulate AI in the private sector, many are exploring how public agencies can leverage the powerful technology for public good. California’s approach, which also requires companies to disclose what large language models they use to develop AI tools, is meant to build public trust, officials said. 

The state’s testing of the tools and collecting of feedback from state workers are some of the best practices to limit potential risks, said Meredith Lee, chief technical adviser for the University of California-Berkeley’s College of Computing, Data Science and Society. The challenge is determining how to assure continued testing and learning about the tools’ potential risks after deployment. 

“This is not something where you just work on testing for some small amount of time and that’s it,” Lee said. “Putting in the structures for people to be able to revisit and better understand the deployments further down the line is really crucial.” 

The California Department of Transportation is looking for tools that would analyze traffic data and come up with solutions to reduce highway traffic and make roads safer. The state’s Department of Tax and Fee Administration, which administers more than 40 programs, wants an AI tool to help its call center cut wait times and call length. The state is also seeking technologies to provide non-English speakers information about health and social services benefits in their languages and to streamline the inspection process for health care facilities. 

The tools are to be designed to assist state workers, not replace them, said Nick Maduros, director of the Department of Tax and Fee Administration. 

Call center workers there took more than 660,000 calls last year. The state envisions the AI technology listening along to those calls and pulling up specific tax code information associated with the problems callers describe. Workers could decide whether to use the information.

Currently, call center workers have to simultaneously listen to the call and manually look up the code, Maduros said. 

“If it turns out it doesn’t serve the public better, then we’re out $1,” Maduros said. “And I think that’s a pretty good deal for the citizens of California.” 

Tong wouldn’t say when a successfully vetted tool would be deployed, but added that the state was moving as fast as it can. 

“The whole essence of using GenAI is it doesn’t take years,” Tong said. “GenAI doesn’t wait for you.”

Technology crushing human creativity? Apple’s new iPad ad strikes nerve online

NEW YORK — A newly released ad promoting Apple’s new iPad Pro has struck quite a nerve online.

The ad, which was released by the tech giant Tuesday, shows a hydraulic press crushing just about every creative instrument artists and consumers have used over the years — from a piano and record player, to piles of paint, books, cameras and relics of arcade games. Resulting from the destruction? A pristine new iPad Pro.

“The most powerful iPad ever is also the thinnest,” a narrator says at the end of the commercial.

Apple’s intention seems straightforward: Look at all the things this new product can do. But critics have called it tone-deaf — with several marketing experts noting the campaign’s execution didn’t land.

“I had a really disturbing reaction to the ad,” said Americus Reed II, professor of marketing at The Wharton School of the University of Pennsylvania. “I understood conceptually what they were trying to do, but … I think the way it came across is, here is technology crushing the life of that nostalgic sort of joy (from former times).”

The ad also arrives during a time many feel uncertain or fearful about seeing their work or everyday routines “replaced” by technological advances — particularly amid the rapid commercialization of generative artificial intelligence. And watching beloved items get smashed into oblivion doesn’t help curb those fears, Reed and others note.

Several celebrities were also among the voices critical of Apple’s “Crush!” commercial on social media this week.

“The destruction of the human experience. Courtesy of Silicon Valley,” actor Hugh Grant wrote on the social media platform X, in a repost of Apple CEO Tim Cook’s sharing of the ad.

Some found the ad to be a telling metaphor of the industry today — particularly concerns about big tech negatively impacting creatives. Filmmaker Justine Bateman wrote on X that the commercial “crushes the arts.”

Experts added that the commercial marked a notable departure from Apple’s past marketing, which has often taken more positive or uplifting approaches.

“My initial thought was that Apple has become exactly what it never wanted to be,” Vann Graves, executive director of the Virginia Commonwealth University’s Brandcenter, said.

Graves pointed to Apple’s famous 1984 ad introducing the Macintosh computer, which he said focused more on uplifting creativity and thinking outside of the box as a unique individual. In contrast, Graves added, “this (new iPad) commercial says, ‘No, we’re going to take all the creativity in the world and use a hydraulic press to push it down into one device that everyone uses.'”

In a statement shared with Ad Age on Thursday, Apple apologized for the ad. The outlet also reported that Apple no longer plans to run the spot on TV.

“Creativity is in our DNA at Apple, and it’s incredibly important to us to design products that empower creatives all over the world,” Tor Myhren, the company’s vice president of marketing communications, told Ad Age. “Our goal is to always celebrate the myriad of ways users express themselves and bring their ideas to life through iPad. We missed the mark with this video, and we’re sorry.”

Cupertino, California-based Apple unveiled its latest generation of iPad Pros and Airs earlier this week in a showcase that lauded new features for both lines. The Pro sports a thinner design, a new M4 processor for added processing power, slightly upgraded storage and dual OLED panels for a brighter, crisper display.

Apple is trying to juice demand for iPads after its sales of the tablets plunged 17% from last year during the January-March period. After its 2010 debut helped redefine the tablet market, the iPad has become a minor contributor to Apple’s success. It currently accounts for just 6% of the company’s sales.

Online abuse silences women in Ethiopia, study finds

Addis Ababa, Ethiopia — Research into online abuse and hate speech reveals most women in Ethiopia face gender-targeted attacks across Facebook, Telegram and X.

The abuse and hate speech are prompting many Ethiopian women to withdraw from public life, online and off, according to the recent research.

The Center for Information Resilience, a U.K.-based nonprofit organization, spearheaded the study. The CIR report, released Wednesday, says that women in Ethiopia are on the receiving end of abuse and hate speech across all three social media platforms, with Facebook cited as the worst.

Over 2,000 inflammatory keywords were found in the research, which looked at three Ethiopian languages, Amharic, Afan Oromo and Tigrigna, as well as English. The list is the most comprehensive lexicon of inflammatory words for Ethiopia, according to the researchers.

Over 78% of the women interviewed reported feelings of fear or anxiety after experiencing online abuse.

It is highly likely similar problems exist in areas of society that have not been analyzed yet, said Felicity Mulford, editor and researcher at CIR.

“This data can be used by human rights advocates, women’s rights advocates, in their advocacy,” she said. “We believe that it’s incredibly impactful, because even though we’ve only got four languages, it shows some of the [trends] that exist across Ethiopia.”

Online abuse is so widespread in Ethiopia that it has been “normalized to the point of invisibility,” the report’s authors said.

Betelehem Akalework, co-founder of Setaset Power, an Afro-feminist movement in Ethiopia, said her work has opened doors to more-serious, targeted attacks.

“We [were] mentally prepared for it to some extent,” she said. “We [weren’t] surprised that the backlash was that heavy, but then we did not anticipate the gravity of that backlash. So, we took media training, and we took digital security trainings.”

The Ethiopian Human Rights Defenders Center, established three years ago, offers protection for human rights defenders and social media activists in the country.

The center’s program coordinator, Kalkidan Tesfaye, said there must be more initiative from the government in education and policymaking to help women protect themselves from online abuse.

“In our recommendation earlier, we were talking about how the Ministry of Education can incorporate digital safety training … a very essential element to learning about computers or acquiring digital skills,” Tesfaye said.

The researchers also investigated other protected characteristics under Ethiopian law, including ethnicity, religion and race. The findings showed that women face compounded attacks, as they are also often targeted for their ethnicity and religion.