Category Archives: Business

Economy and business news. Business is the practice of making one’s living or making money by producing or buying and selling products (such as goods and services). It is also “any activity or enterprise entered into for profit.” A business entity is not necessarily separate from its owner, and creditors can hold the owner liable for debts the business has acquired.

Japan’s moon lander still going after 3 lunar nights

TOKYO — Japan’s first moon lander has survived a third freezing lunar night, Japan’s space agency said Wednesday after receiving an image from the device three months after it landed on the moon.

The Japan Aerospace Exploration Agency said the lunar probe responded to a signal from Earth on Tuesday night, confirming it had survived another weekslong lunar night.

Temperatures can fall to minus 170 degrees Celsius during a lunar night and rise to around 100 Celsius during a lunar day. 

The probe, Smart Lander for Investigating Moon, or SLIM, reached the lunar surface on Jan. 20, making Japan the fifth country to successfully place a probe on the moon. 

SLIM landed the wrong way up with its solar panels initially unable to see the sun, and had to be turned off within hours, but powered on when the sun rose eight days later.

SLIM, which was tasked with testing Japan’s pinpoint landing technology and collecting geological data and images, was not designed to survive lunar nights.

JAXA said on the social media platform X that SLIM’s key functions are still working despite repeated harsh cycles of temperature changes. The agency said it plans to closely monitor the lander’s deterioration. 

Scientists are hoping to find clues about the origin of the moon by comparing the mineral compositions of moon rocks and those of Earth.

The message from SLIM came days after NASA restored contact with Voyager 1, the farthest space probe from Earth, which had been sending garbled data back for months.

A U.S. lunar probe developed by a private space company announced the termination of its operations a month after its February landing, while an Indian moon lander that touched down in 2023 failed to wake up after the lunar night. 

 

LogOn: Hologram-like experience allows people to connect

The Dutch company Holoconnects specializes in holographic illusions and is now delivering life-size personal connections with a 2-meter-tall box that makes it feel like the person you are talking to is physically present. Deana Mitchell has more from Austin, Texas, in this week’s episode of LogOn.

Taiwan attracting Southeast Asian tech students

Taiwan is looking to Southeast Asia as a pipeline to fill its shortage of high-tech talent. The number of foreign students coming to the island has been growing, especially from Vietnam and Indonesia. VOA Mandarin’s Peh Hong Lim reports from Hsinchu, Taiwan. Adrianna Zhang contributed.

EU may suspend TikTok’s new rewards app over risks to kids

LONDON — The European Union on Monday demanded TikTok provide more information about a new app that pays users to watch videos and warned that it could order the video sharing platform to suspend addictive features that pose a risk to kids. 

The 27-nation EU’s executive commission said it was opening formal proceedings to determine whether TikTok Lite breached the bloc’s new digital rules when the app was rolled out in France and Spain. 

Brussels was ratcheting up the pressure on TikTok after the company failed to respond to a request last week for information on whether the new app complies with the Digital Services Act, a sweeping law that took effect last year and is intended to clean up social media platforms. 

TikTok Lite is a slimmed-down version of the main TikTok app that lets users earn rewards. Points earned by watching videos, liking content and following content creators can then be exchanged for rewards including Amazon vouchers and gift cards on PayPal. 

The commission wants to see the risk assessment that TikTok should have carried out before deploying the app in the European Union. It’s worried TikTok launched the app without assessing how to mitigate “potential systemic risks” such as addictive design features that could pose harm to children. 

TikTok didn’t respond immediately to a request for comment. The company said last week it would respond to the commission’s request and noted that rewards are restricted to users 18 years and older, who have to verify their age. 

“With an endless stream of short and fast-paced videos, TikTok offers fun and a sense of connection beyond your immediate circle,” said European Commissioner Thierry Breton, one of the officials leading the bloc’s push to rein in big tech companies. “But it also comes with considerable risks, especially for our children: addiction, anxiety, depression, eating disorders, low attention spans.” 

The EU is giving TikTok 24 hours to turn over the risk assessment and until Wednesday to argue its case. Any order to suspend the TikTok Lite app’s reward features could come as early as Thursday. 

It’s the first time that the EU has issued a legally binding order for such information since the Digital Services Act took effect. Officials stepped up the pressure after TikTok failed to respond to last week’s request for the information. 

If TikTok still fails to respond, the commission warned, the company faces fines of up to 1% of its total annual income or worldwide turnover and “periodic penalties” of up to 5% of its daily income or global turnover. 

TikTok was already facing intensified scrutiny from the EU. The commission has a separate, ongoing in-depth investigation into the main TikTok app’s DSA compliance, examining whether it’s doing enough to curb “systemic risks” stemming from its design, including “algorithmic systems” that might stimulate “behavioral addictions.” Officials are worried that measures including age verification tools to stop minors from finding “inappropriate content” might not be effective.

Connected Africa Summit addresses continent’s challenges and opportunities, aims to bridge digital divide

Nairobi, Kenya — Government representatives from Africa, along with ICT (information and communication technology) officials, and international organizations have gathered in Nairobi for a Connected Africa Summit. They are discussing the future of technology, unlocking the continent’s growth beyond connectivity, and addressing the challenges and opportunities in the continent’s information and technology sector.

Speaking at the Connected Africa Summit opening in Nairobi Monday, Kenyan President William Ruto said bridging the technology gap is important for Africa’s economic growth and innovation.  

“Closing the digital divide is a priority in terms of enhancing connectivity, expanding the contribution of the ICT sector to Africa’s GDP and driving overall GDP growth across all sectors. Africa’s digital economy has immense potential…,” Ruto said. “Our youth population, the youngest globally, is motivated and prepared to drive the digital economy, foster innovation and entrench new technologies.”    

Experts say digital transformation in Africa can improve its industrialization, reduce poverty, create jobs, and improve its citizens’ lives.

According to the World Bank, 36 percent of Africa’s 1.3 billion people have access to the internet, and in some of the areas that have connections, the quality of the service is poor compared with other regions.

The international financial institution’s figures show that Africa saw a 115 percent increase in internet users between 2016 and 2021 and that 160 million people gained broadband internet access between 2019 and 2022.  

Africa’s digital growth has been hampered by the lack of an accessible, secure, and reliable internet, which is critical in closing the digital gap and reducing inequalities.  

Lacina Kone is the head of Smart Africa, an organization that coordinates ICT activities within the continent. He says integrating technology into African societies’ daily activities is necessary and cannot be ignored.  

“Digital transformation is no longer a choice but a necessity, just like water utility, just like any other utility we use at home,” Kone said. “So, this connected Africa is an opportunity for all of us. I see a lot of country members, and ICT ministers are here to align our visions together.”

The COVID-19 pandemic has accelerated the consumption of technology in different sectors of the African economy, and experts say opportunities now exist in mobile services, the development of broadband infrastructure, and data storage.  

The U.S. ambassador to Kenya, Meg Whitman, called on the summit attendees to develop technologies that can solve people’s problems.  

“I encourage all of you to consider this approach for your economies. Look at what strengths already exist in your countries and ask how technology can solve challenges in those sectors to make you a leader through innovation,” Whitman said. “Sometimes innovation looks like Artificial Intelligence, satellites and e-money. Sometimes though it looks much different than we expect. However, innovation always includes three elements: solution focused, it’s specific and it’s sustainable. Bringing solution-focused, being solution-focused is the foundation of shaping the future of a connected Africa.”

The summit ends on Friday, but before that, those attending aim to explore ways to improve Africa’s technology usage, enhance continental connectivity, boost competitiveness, and ensure the continent keeps up with the ever-evolving tech sector.

Doctors display ‘PillBot’ that can explore inner human body

VANCOUVER, British Columbia — A new, digestible mini-robotic camera, about the size of a multivitamin pill, was demonstrated at the annual TED Conference in Vancouver. The remote-controlled device could eliminate the need for some invasive medical procedures.

With current technology, exploration of the digestive tract involves going through the highly invasive procedure of an endoscopy, in which a camera at the end of a cord is inserted down the throat and into a medicated patient’s stomach.

But the robotic pill, developed by Endiatx in Hayward, California, is designed to be the first motorized replacement of the procedure. A patient fasts for a day, then swallows the PillBot with lots of water. The PillBot, acting like a miniature submarine, is piloted in the body by a wireless remote control. After the exam, it then flushes out of the human body naturally.

For Dr. Vivek Kumbhari, co-founder of the company and professor of medicine and chairman of gastroenterology and hepatology at the Mayo Clinic, it is the latest step toward his goal of democratizing previously complex medicine.

If procedure-based diagnostics can be moved from a hospital to a home, “then I think we have achieved that goal,” he said. The new setting would require fewer medical staff personnel and no anesthesia, producing “a safer, more comfortable approach.”

Kumbhari said this technology also makes medicine more efficient, allowing people to get care earlier in the course of an illness.

For co-founder Alex Luebke, the micro-robotic pill can be transformative for rural areas around the world where there is limited access to medical facilities.

“Especially in developing countries, there is no access” to complex medical procedures, he said. “So being able to have the technology, gather all that information and provide you the solution, even in remote areas – that’s the way to do it.”

Luebke said if internet access is not immediately available, information from the PillBot can be transmitted later.

The duo are also utilizing artificial intelligence to provide the initial diagnosis, with a medical doctor later developing a treatment plan.

Joel Bervell is known to his million social media followers as the “Medical Mythbuster” and is a fourth-year medical student at Washington State University. He said the strength of this type of technology is how it can be easily used in remote and rural communities.

Many patients “travel hundreds of miles, literally, for their appointment.” Use of a pill that would not require a visit to a physician “would be life-changing for them.” 

The micro-robotic pill is undergoing trials and will soon be in front of the U.S. Food and Drug Administration for approval, which developers expect to have in 2025. It’s expected that the pill would then be widely available in 2026.

Kumbhari hopes the technology can be expanded to the bowels, vascular system, heart, liver, brain and other parts of the body. Eventually, he hopes, this will allow hospitals to be left for more urgent medical care and surgeries.

Apple pulls WhatsApp and Threads from App Store on Beijing’s orders

HONG KONG — Apple said it had removed Meta’s WhatsApp messaging app and its Threads social media app from the App Store in China to comply with orders from Chinese authorities.

The apps were removed from the store Friday after Chinese officials cited unspecified national security concerns.

Their removal comes amid elevated tensions between the U.S. and China over trade, technology and national security.

The U.S. has threatened to ban TikTok over national security concerns. But while TikTok, owned by Chinese technology firm ByteDance, is used by millions in the U.S., apps like WhatsApp and Threads are not commonly used in China.

Instead, the messaging app WeChat, owned by Chinese company Tencent, reigns supreme.

Other Meta apps, including Facebook, Instagram and Messenger, remained available for download, although use of such foreign apps is blocked in China by its “Great Firewall,” a network of filters that restricts access to foreign websites such as Google and Facebook.

“The Cyberspace Administration of China ordered the removal of these apps from the China storefront based on their national security concerns,” Apple said in a statement.

“We are obligated to follow the laws in the countries where we operate, even when we disagree,” Apple said.

A spokesperson for Meta referred questions to Apple.

Apple, previously the world’s top smartphone maker, recently lost the top spot to Korean rival Samsung Electronics. The U.S. firm has run into headwinds in China, one of its top three markets, with sales slumping after Chinese government agencies and employees of state-owned companies were ordered not to bring Apple devices to work.

Apple has been diversifying its manufacturing bases outside China.

Its CEO Tim Cook has been visiting Southeast Asia this week, traveling to Hanoi and Jakarta before wrapping up his travels in Singapore. On Friday he met with Singapore’s deputy prime minister, Lawrence Wong; the two “discussed the partnership between Singapore and Apple, and Apple’s continued commitment to doing business in Singapore.”

Apple pledged to invest over $250 million to expand its campus in the city-state.

Earlier this week, Cook met with Vietnamese Prime Minister Pham Minh Chinh in Hanoi, pledging to increase spending on Vietnamese suppliers.

He also met with Indonesian President Joko Widodo. Cook later told reporters that they talked about Widodo’s desire to promote manufacturing in Indonesia, and said that this was something that Apple would “look at.”

Meta’s new AI agents confuse Facebook users 

CAMBRIDGE, Massachusetts — Facebook parent Meta Platforms has unveiled a new set of artificial intelligence systems that are powering what CEO Mark Zuckerberg calls “the most intelligent AI assistant that you can freely use.” 

But as Zuckerberg’s crew of amped-up Meta AI agents started venturing into social media in recent days to engage with real people, their bizarre exchanges exposed the ongoing limitations of even the best generative AI technology. 

One joined a Facebook moms group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum. 

Meta and leading AI developers Google and OpenAI, along with startups such as Anthropic, Cohere and France’s Mistral, have been churning out new AI language models, each hoping to convince customers it has the smartest, handiest or most efficient chatbot. 

While Meta is saving the most powerful of its AI models, called Llama 3, for later, on Thursday it publicly released two smaller versions of the same Llama 3 system and said it’s now baked into the Meta AI assistant feature in Facebook, Instagram and WhatsApp. 

AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions typically smarter and more capable than their predecessors. Meta’s newest models were built with 8 billion and 70 billion parameters — the internal values a model learns during training, and a rough measure of its size and capability. A bigger, roughly 400 billion-parameter model is still in training. 
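
To make the “predict the most plausible next word” idea concrete, here is a minimal, purely illustrative Python sketch; it is not Meta’s Llama code, and the sample text and function names are made up for the example. It simply counts which word most often follows each word in a tiny sentence and uses those counts to guess the next word — real models do something far more sophisticated with billions of learned parameters.

from collections import Counter, defaultdict

# Tiny sample text; a real model trains on trillions of words.
sample_text = "the cat sat on the mat and the cat slept on a mat"
words = sample_text.split()

# Count how often each word follows another.
following = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Return the word that most often followed `word` in the sample text.
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints 'cat' (it followed 'the' most often)
print(predict_next("sat"))  # prints 'on'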

“The vast majority of consumers don’t candidly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun and versatile AI assistant,” Nick Clegg, Meta’s president of global affairs, said in an interview. 

‘A little stiff’

He added that Meta’s AI agent is loosening up. Some people found the earlier Llama 2 model — released less than a year ago — to be “a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions,” he said. 

But in letting down their guard, Meta’s AI agents have also been spotted posing as humans with made-up life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by group members, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press. 

“Apologies for the mistake! I’m just a large language model, I don’t have experiences or children,” the chatbot told the group. 

One group member who also happens to study AI said it was clear that the agent didn’t know how to differentiate a helpful response from one that would be seen as insensitive, disrespectful or meaningless when generated by AI rather than a human. 

“An AI assistant that is not reliably helpful and can be actively harmful puts a lot of the burden on the individuals using it,” said Aleksandra Korolova, an assistant professor of computer science at Princeton University. 

Clegg said Wednesday that he wasn’t aware of the exchange. Facebook’s online help page says the Meta AI agent will join a group conversation if invited, or if someone “asks a question in a post and no one responds within an hour.” The group’s administrators have the ability to turn it off. 

Need a camera?

In another example shown to the AP on Thursday, the agent caused confusion in a forum for swapping unwanted items near Boston. Exactly one hour after a Facebook user posted about looking for certain items, an AI agent offered a “gently used” Canon camera and an “almost-new portable air conditioning unit that I never ended up using.” 

Meta said in a written statement Thursday that “this is new technology and it may not always return the response we intend, which is the same for all generative AI systems.” The company said it is constantly working to improve the features. 

In the year after ChatGPT sparked a frenzy for AI technology that generates human-like writing, images, code and sound, the tech industry and academia introduced 149 large AI systems trained on massive datasets, more than double the year before, according to a Stanford University survey. 

They may eventually hit a limit, at least when it comes to data, said Nestor Maslej, a research manager for Stanford’s Institute for Human-Centered Artificial Intelligence. 

“I think it’s been clear that if you scale the models on more data, they can become increasingly better,” he said. “But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet.” 

More data — acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits — will continue to drive improvements. “Yet they still cannot plan well,” Maslej said. “They still hallucinate. They’re still making mistakes in reasoning.” 

Getting to AI systems that can perform higher-level cognitive tasks and common-sense reasoning — where humans still excel — might require a shift beyond building ever-bigger models. 

Seeing what works

For the flood of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write reports and financial insights, and summarize long documents. 

“You’re seeing companies kind of looking at fit, testing each of the different models for what they’re trying to do and finding some that are better at some areas rather than others,” said Todd Lohr, a leader in technology consulting at KPMG. 

Unlike other model developers selling their AI services to other businesses, Meta is largely designing its AI products for consumers — those using its advertising-fueled social networks. Joelle Pineau, Meta’s vice president of AI research, said at a recent London event that the company’s goal over time is to make a Llama-powered Meta AI “the most useful assistant in the world.” 

“In many ways, the models that we have today are going to be child’s play compared to the models coming in five years,” she said. 

But she said the “question on the table” is whether researchers have been able to fine-tune its bigger Llama 3 model so that it’s safe to use and doesn’t, for example, hallucinate or engage in hate speech. In contrast to leading proprietary systems from Google and OpenAI, Meta has so far advocated for a more open approach, publicly releasing key components of its AI systems for others to use. 

“It’s not just a technical question,” Pineau said. “It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our models ever more general and powerful without properly socializing them, we are going to have a big problem on our hands.”

Developers: Enhanced AI could outthink humans in 2 to 5 years

VANCOUVER, British Columbia — Just as the world is getting used to the rapidly expanding use of AI, or artificial intelligence, AGI is looming on the horizon.

Experts say when artificial general intelligence becomes reality, it could perform tasks better than human beings, with the possibility of higher cognitive abilities, emotions, and ability to self-teach and develop.

Ramin Hasani is a research scientist at the Massachusetts Institute of Technology and the CEO of Liquid AI, which builds specific AI systems for different organizations. He is also a TED Fellow, a program that helps develop what the nonprofit TED conference considers to be “game changers.”

Hasani says the first signs of AGI are realistically two to five years away, and that it will have a direct impact on our everyday lives.

What’s coming, he says, will be “an AI system that can have the collective knowledge of humans. And that can beat us in tasks that we do in our daily life, something you want to do … your finances, you’re solving, you’re helping your daughter to solve their homework. And at the same time, you want to also read a book and do a summary. So an AGI would be able to do all that.”

Hasani says that advancing artificial intelligence will allow things to move faster and that AI could even be designed to have emotions.

He says proper regulation can be achieved by better understanding how different AI systems are developed.

This thought is shared by Bret Greenstein, a partner at London-based PricewaterhouseCoopers who leads its efforts on artificial intelligence.

“I think one is a personal responsibility for people in leadership positions, policymakers, to be educated on the topic, not in the fact that they’ve read it, but to experience it, live it and try it. And to be with people who are close to it, who understand it,” he says.

Greenstein warns that if AI is over-regulated, innovation will be curtailed and access will be limited for people who could benefit from it.

For musician, comedian and actor Reggie Watts, who was the bandleader on “The Late Late Show with James Corden” on CBS, AI and the coming of AGI will be a great way to find mediocre music, because it will be mimicked easily.

Calling it “artificial consciousness,” he says existing laws to protect intellectual property rights and creative industries, like music, TV and film, will work, provided they are properly adopted.

“I think it’s just about the usage of the tool, how it’s … how it’s used. Is there money being made off of it, so on, so forth. So, I think that we already have … tools that exist that deal with these types of situations, but [the laws and regulations] need to be expanded to include AI because there’ll probably be a lot more nuance to it.”

Watts says that any form of AI is going to be smarter than one person, almost like all human intelligence collected into one point. He feels this will cause humanity to discover interesting things and the nature of reality itself.

This year’s conference marked the 40th for TED, the nonprofit organization whose name is an acronym for Technology, Entertainment and Design.

Google fires 28 workers protesting contract with Israel

New York — Google fired 28 employees following a disruptive sit-down protest over the tech giant’s contract with the Israeli government, a Google spokesperson said Thursday.

The Tuesday demonstration was organized by the group “No Tech for Apartheid,” which has long opposed “Project Nimbus,” Google’s joint $1.2 billion contract with Amazon to provide cloud services to the government of Israel.

Video of the demonstration showed police arresting Google workers in Sunnyvale, California, in the office of Google Cloud CEO Thomas Kurian, according to a post by the advocacy group on X, formerly Twitter.

Kurian’s office was occupied for 10 hours, the advocacy group said.

Workers held signs including “Googlers against Genocide,” a reference to accusations surrounding Israel’s attacks on Gaza.

“No Tech for Apartheid,” which also held protests in New York and Seattle, pointed to an April 12 Time magazine article reporting on a draft contract under which Google would bill the Israeli Ministry of Defense more than $1 million for consulting services.

A “small number” of employees “disrupted” a few Google locations, but the protests are “part of a longstanding campaign by a group of organizations and people who largely don’t work at Google,” a Google spokesperson said.

“After refusing multiple requests to leave the premises, law enforcement was engaged to remove them to ensure office safety,” the Google spokesperson said. “We have so far concluded individual investigations that resulted in the termination of employment for 28 employees, and will continue to investigate and take action as needed.”

Israel is one of “numerous” governments for which Google provides cloud computing services, the Google spokesperson said.

“This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services,” the Google spokesperson said.

AI-generated fashion models could bring more diversity to industry — or leave it with less

Chicago, Illinois — London-based model Alexsandrah has a twin, but not in the way you’d expect: Her counterpart is made of pixels instead of flesh and blood.

The virtual twin was generated by artificial intelligence and has already appeared as a stand-in for the real-life Alexsandrah in a photo shoot. Alexsandrah, who goes by her first name professionally, in turn receives credit and compensation whenever the AI version of herself gets used — just like a human model.

Alexsandrah says she and her alter-ego mirror each other “even down to the baby hairs.” And it is yet another example of how AI is transforming creative industries — and the way humans may or may not be compensated.

Proponents say the growing use of AI in fashion modeling showcases diversity in all shapes and sizes, allowing consumers to make more tailored purchase decisions that in turn reduces fashion waste from product returns. And digital modeling saves money for companies and creates opportunities for people who want to work with the technology.

But critics raise concerns that digital models may push human models — and other professionals like makeup artists and photographers — out of a job. Unsuspecting consumers could also be fooled into thinking AI models are real, and companies could claim credit for fulfilling diversity commitments without employing actual humans.

“Fashion is exclusive, with limited opportunities for people of color to break in,” said Sara Ziff, a former fashion model and founder of the Model Alliance, a nonprofit aiming to advance workers’ rights in the fashion industry. “I think the use of AI to distort racial representation and marginalize actual models of color reveals this troubling gap between the industry’s declared intentions and their real actions.”  

Women of color in particular have long faced higher barriers to entry in modeling and AI could upend some of the gains they’ve made. Data suggests that women are more likely to work in occupations in which the technology could be applied and are more at risk of displacement than men.

In March 2023, iconic denim brand Levi Strauss & Co. announced that it would be testing AI-generated models produced by Amsterdam-based company Lalaland.ai to add a wider range of body types and underrepresented demographics on its website. But after receiving widespread backlash, Levi clarified that it was not pulling back on its plans for live photo shoots, the use of live models or its commitment to working with diverse models.

“We do not see this (AI) pilot as a means to advance diversity or as a substitute for the real action that must be taken to deliver on our diversity, equity and inclusion goals and it should not have been portrayed as such,” Levi said in its statement at the time.

The company last month said that it has no plans to scale the AI program.

The Associated Press reached out to several other retailers to ask whether they use AI fashion models. Target, Kohl’s and fast-fashion giant Shein declined to comment; Temu did not respond to a request for comment.

Meanwhile, spokespeople for Neiman Marcus, H&M, Walmart and Macy’s said their respective companies do not use AI models, although Walmart clarified that “suppliers may have a different approach to photography they provide for their products, but we don’t have that information.”

Nonetheless, companies that generate AI models are finding demand for the technology, including Lalaland.ai, which Michael Musandu co-founded after growing frustrated by the absence of clothing models who looked like him.

“One model does not represent everyone that’s actually shopping and buying a product,” he said. “As a person of color, I felt this painfully myself.”

Musandu says his product is meant to supplement traditional photo shoots, not replace them. Instead of seeing one model, shoppers could see nine to 12 models using different size filters, which would enrich their shopping experience and help reduce product returns and fashion waste.

The technology is actually creating new jobs, since Lalaland.ai pays humans to train its algorithms, Musandu said.

And if brands “are serious about inclusion efforts, they will continue to hire these models of color,” he added.

London-based model Alexsandrah, who is Black, says her digital counterpart has helped her distinguish herself in the fashion industry. In fact, the real-life Alexsandrah has even stood in for a Black computer-generated model named Shudu, created by Cameron Wilson, a former fashion photographer turned CEO of The Diigitals, a U.K.-based digital modeling agency.

Wilson, who is white and uses they/them pronouns, designed Shudu in 2017, described on Instagram as “The World’s First Digital Supermodel.” But critics at the time accused Wilson of cultural appropriation and digital Blackface.

Wilson took the experience as a lesson and transformed The Diigitals to make sure Shudu — who has been booked by Louis Vuitton and BMW — didn’t take away opportunities but instead opened possibilities for women of color. Alexsandrah, for instance, has modeled in-person as Shudu for Vogue Australia, and writer Ama Badu came up with Shudu’s backstory and portrays her voice for interviews.

Alexsandrah said she is “extremely proud” of her work with The Diigitals, which created her own AI twin: “It’s something that even when we are no longer here, the future generations can look back at and be like, ‘These are the pioneers.'”

But for Yve Edmond, a New York City area-based model who works with major retailers to check the fit of clothing before it’s sold to consumers, the rise of AI in fashion modeling feels more insidious.

Edmond worries modeling agencies and companies are taking advantage of models, who are generally independent contractors afforded few labor protections in the U.S., by using their photos to train AI systems without their consent or compensation.

She described one incident in which a client asked to photograph Edmond moving her arms, squatting and walking for “research” purposes. Edmond refused and later felt swindled — her modeling agency had told her she was being booked for a fitting, not to build an avatar.

“This is a complete violation,” she said. “It was really disappointing for me.”

But absent AI regulations, it’s up to companies to be transparent and ethical about deploying AI technology. And Ziff, the founder of the Model Alliance, likens the current lack of legal protections for fashion workers to “the Wild West.”

That’s why the Model Alliance is pushing for legislation like the one being considered in New York state, in which a provision of the Fashion Workers Act would require management companies and brands to obtain models’ clear written consent to create or use a model’s digital replica; specify the amount and duration of compensation, and prohibit altering or manipulating models’ digital replica without consent.

Alexsandrah says that with ethical use and the right legal regulations, AI might open up doors for more models of color like herself. She has let her clients know that she has an AI replica, and she funnels any inquiries for its use through Wilson, who she describes as “somebody that I know, love, trust and is my friend.” Wilson says they make sure any compensation for Alexsandrah’s AI is comparable to what she would make in-person.

Edmond, however, is more of a purist: “We have this amazing Earth that we’re living on. And you have a person of every shade, every height, every size. Why not find that person and compensate that person?”

Instagram blurring nudity in messages to protect teens, fight sexual extortion

LONDON — Instagram says it’s deploying new tools to protect young people and combat sexual extortion, including a feature that will automatically blur nudity in direct messages.

The social media platform said in a blog post Thursday that it’s testing out the features as part of its campaign to fight sexual scams and other forms of “image abuse,” and to make it tougher for criminals to contact teens.

Sexual extortion, or sextortion, involves persuading a person to send explicit photos online and then threatening to make the images public unless the victim pays money or engages in sexual favors. Recent high-profile cases include two Nigerian brothers who pleaded guilty to sexually extorting teen boys and young men in Michigan, including one who took his own life, and a Virginia sheriff’s deputy who sexually extorted and kidnapped a 15-year-old girl.

Instagram and other social media companies have faced growing criticism for not doing enough to protect young people. Mark Zuckerberg, the CEO of Instagram’s owner Meta Platforms, apologized to the parents of victims of such abuse during a Senate hearing earlier this year.

Meta, which is based in Menlo Park, California, also owns Facebook and WhatsApp but the nudity blur feature won’t be added to messages sent on those platforms.

Instagram said scammers often use direct messages to ask for “intimate images.” To counter this, it will soon start testing out a nudity-protection feature for direct messages that blurs any images with nudity “and encourages people to think twice before sending nude images.”

“The feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return,” Instagram said.

The feature will be turned on by default globally for teens under 18. Adult users will get a notification encouraging them to activate it.

Images with nudity will be blurred with a warning, giving users the option to view them. They’ll also get an option to block the sender and report the chat.

People sending direct messages with nudity will get a message reminding them to be cautious when sending “sensitive photos.” They’ll also be informed that they can unsend the photos if they change their mind, but that there’s a chance others may have already seen them.

As with many of Meta’s tools and policies around child safety, critics saw the move as a positive step, but one that does not go far enough.

“I think the tools announced can protect senders, and that is welcome. But what about recipients?” said Arturo Béjar, former engineering director at the social media giant who is known for his expertise in curbing online harassment. He said 1 in 8 teens receives an unwanted advance on Instagram every seven days, citing internal research he compiled while at Meta that he presented in November testimony before Congress. “What tools do they get? What can they do if they get an unwanted nude?”

Béjar said “things won’t meaningfully change” until there is a way for a teen to say they’ve received an unwanted advance, and there is transparency about it.

Instagram said it’s working on technology to help identify accounts that could potentially be engaging in sexual extortion scams, “based on a range of signals that could indicate sextortion behavior.”

To stop criminals from connecting with young people, it’s also taking measures including not showing the “message” button on a teen’s profile to potential sextortion accounts, even if they already follow each other, and testing new ways to hide teens from these accounts.

In January, the FBI warned of a “huge increase” in sextortion cases targeting children — including financial sextortion, where someone threatens to release compromising images unless the victim pays. The targeted victims are primarily boys between the ages of 14 and 17, but the FBI said any child can become a victim. In the six-month period from October 2022 to March 2023, the FBI saw a more than 20% increase in reporting of financially motivated sextortion cases involving minor victims compared with the same period in the previous year.

With $6.6B to Arizona hub, Biden touts big steps in US chipmaking

Washington; Flagstaff, Arizona — President Joe Biden on Monday announced a $6.6 billion grant to Taiwan’s top chip manufacturer to produce semiconductors in the southwestern U.S. state of Arizona, which includes a third facility that will bring the foreign tech giant’s investment in the state to $65 billion.

Biden said the move aims to reverse a decades-old slump in American chip manufacturing. Taiwan Semiconductor Manufacturing Company (TSMC), based on the self-ruled island that China claims as its own, holds more than half of the global market share in chip manufacturing.

The new facility, Biden said, will put the U.S. on track to produce 20% of the world’s leading-edge semiconductors by 2030.

“I was determined to turn that around, and thanks to my CHIPS and Science Act — a key part of my Investing in America agenda — semiconductor manufacturing and jobs are making a comeback,” Biden said in a statement.

U.S. production of this American-born technology has fallen steeply in recent decades, said Andy Wang, dean of engineering at Northern Arizona University.

“As a nation, we used to produce 40% of microchips for the whole world,” he told VOA. “Now, we produce less than 10%.”

A single semiconductor transistor is smaller than a grain of sand. But billions of them, packed neatly together, can connect the world through a mobile phone, control sophisticated weapons of war and satellites that orbit the Earth, and someday may even drive a car.

The immense value of these tiny chips has fueled fierce competition between the U.S. and China.

The U.S. Department of Commerce has taken several steps to hamper China’s efforts to build its own chip industry. Those include export controls and new rules to prevent “foreign countries of concern” — which it said includes China, Iran, North Korea and Russia — from benefiting from funding from the CHIPS and Science Act.

While analysts are divided over whether Taiwan’s dominance of this critical industry makes it more or less vulnerable to Chinese aggression, they agree it gives the island significant global status.

“It is debatable what, if any, role Taiwan’s semiconductor manufacturing prowess plays in deterrence,” said David Sacks, an analyst who focuses on U.S.-China relations at the Council on Foreign Relations. “What is not debatable is how devastating an attack on Taiwan would be for the global economy.”

Biden did not mention U.S. adversaries in his statement, but he noted the impact of Monday’s announcement, saying it “represent(s) a broader story for semiconductor manufacturing that’s made in America and with the strong support of America’s leading technology firms to build the products we rely on every day.”

VOA met with engineers in the new technological hub state, who said the legislation addresses a key weakness in American chip manufacturing.

“We’ve just gotten in the cycle of the last 15 to 20 years, where innovation has slowed down,” said Todd Achilles, who teaches innovation, strategy and policy analysis at the University of California-Berkeley. “It’s all about financial results, investor payouts and stock buybacks. And we’ve lost that innovation muscle. And the CHIPS Act — pulling that together with the CHIPS Act — is the perfect opportunity to restore that.”

The White House says this new investment could create 25,000 construction and manufacturing jobs. Academics say they’re churning out workers at a rapid pace, but America still lacks the needed talent.

“Our engineering college is the largest in the country, with over 33,000 enrolled students, and still we’re hearing from companies across the semiconductor industry that they’re not able to get the talent they need in time,” Zachary Holman, vice dean for research and innovation at Arizona State University, told VOA.

And as the American industry stretches to keep pace, it races against a technical trend known as Moore’s Law: the observation that the number of transistors on a computer chip doubles about every two years. As a result, cutting-edge chips get ever smaller as they grow in computing power.
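
As a rough back-of-the-envelope illustration of how quickly that doubling compounds, the short Python sketch below runs the arithmetic over a decade; the starting figure is a hypothetical round number, not any specific chip’s specification.

# Purely illustrative arithmetic: compound growth under
# "doubles about every two years."
start_transistors = 1_000_000_000  # hypothetical 1 billion transistors today
count = start_transistors
for year in range(2, 11, 2):
    count *= 2
    print(f"After {year:2d} years: {count:,} transistors")
# After 10 years the count has grown 32-fold.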

TSMC in 2022 broke ground on a facility that makes the smallest chip currently available, coming in at 3 nanometers — that’s just wider than a strand of DNA.

Reporter Levi Stallings contributed to this report from Flagstaff, Arizona.

With $6.6B to Arizona hub, Biden touts big steps in US chipmaking

President Joe Biden on Monday announced a $6.6 billion grant to Taiwan’s top chip manufacturer for semiconductor manufacturing in Arizona, which includes a third facility that will bring the tech giant’s investment in the state to $65 billion. VOA’s White House correspondent Anita Powell reports from Washington, with reporter Levi Stallings in Flagstaff, Arizona.

Experts fear Cambodian cybercrime law could aid crackdown

PHNOM PENH, CAMBODIA — The Cambodian government is pushing ahead with a cybercrime law experts say could be wielded to further curtail freedom of speech amid an ongoing crackdown on dissent. 

The cybercrime draft is the third controversial internet law authorities have pursued in the past year as the government, led by new Prime Minister Hun Manet, seeks greater oversight of internet activities. 

Obtained by VOA in both English and Khmer language versions, the latest draft of the cybercrime law is marked “confidential” and contains 55 articles. It lays out various offenses punishable by fines and jail time, including defamation, using “insulting, derogatory or rude language,” and sharing “false information” that could harm Cambodia’s public order and “traditional culture.”  

The law would also allow authorities to collect and record internet traffic data, in real time, of people under investigation for crimes, and would criminalize online material that “depicts any act or activity … intended to stimulate sexual desire” as pornography. 

Digital rights and legal experts who reviewed the law told VOA that its vague language, wide-ranging categories of prosecutable speech and lack of protections for citizens fall short of international standards, instead providing the government more tools to jail dissenters, opposition members, women and LGBTQ+ people. 

Although the law has been in the works since 2016, this is the first draft to surface since versions leaked in 2020 and 2021 and sparked similar criticism. Authorities hope to enact the law by the end of the year. 

“This cybercrime bill offers the government even more power to go after people expressing dissent,” Kian Vesteinsson, a senior research analyst for technology at the human rights organization Freedom House, told VOA.  

“These vague provisions around defamation, insults and disinformation are ripe for abuse, and we know that Cambodian authorities have deployed similarly vague criminal provisions in other contexts,” Vesteinsson said. 

Cambodian law already considers defamation a criminal offense, but the cybercrime draft would make it punishable by up to six months in jail, plus a fine of up to $5,000. The “false information” clause — defined as sharing information that “intentionally harms national defense, national security, relations with other countries, economy, public order, or causes discrimination, or affects traditional culture” — carries a three- to five-year sentence and a fine of up to $25,000. 

Daron Tan, associate international legal adviser at the International Commission of Jurists, told VOA the defamation and false information articles do not comply with the International Covenant on Civil and Political Rights, to which Cambodia is a party, and that the United Nations Human Rights Committee is “very clear that imprisonment is never the appropriate penalty for defamation.” 

“It’s a step very much in the wrong direction,” Tan said. “We are very worried that this would expand the laws that the government can use against its critics.” 

Chea Pov, the deputy head of Cambodia’s National Police and former director of the Ministry of Interior’s Anti-Cybercrime Department that is overseeing the drafting process, told VOA the law “doesn’t restrict your rights” and claimed the U.S. companies which reviewed it “didn’t raise concerns.”  

Google, Meta and Amazon, which the government has said were involved in drafting the law, did not respond to requests for comment. 

“If you say something based on evidence, there is no problem,” Pov said. “But if there is no evidence, [you] defame others, which is also stated in the criminal law … we don’t regard this as a restriction.”  

The law also makes it illegal to use technology to display, trade, produce or disseminate pornography, or to advertise a “product or service mixed with pornography” online. Pornography is defined as anything that “describes a genital or depicts any act or activity involving a sexual organ or any part of the human body, animal, or object … or other similar pornography that is intended to stimulate sexual desire or cause sexual excitement.” 

Experts say this broad category is likely to be disproportionately deployed against women and LGBTQ+ people. 

Cambodian authorities have often rebuked or arrested women for dressing “too sexily” on social media, singing sexual songs or using suggestive speech. In 2020, an online clothes and cosmetics seller received a six-month suspended sentence after posting provocative photos; in another incident, a policewoman was forced to publicly apologize for posting photos of herself breastfeeding. 

Naly Pilorge, outreach director at Cambodian human rights organization Licadho, told VOA the draft law “could lead to more rights violations against women in the country.” 

“This vague definition of ‘pornography’ poses a serious threat to any woman whose online activity the government decides may ‘cause sexual excitement,’” Pilorge said. “The draft law does not acknowledge any legitimate artistic or educational purposes to depict or describe sexual organs, posing another threat to freedom of expression.” 

In March, authorities said they hosted civil society organizations to revisit the draft. They plan to complete the drafting process and send the law to Parliament for passage before the end of the year, according to Pov, the deputy head of police. 

Soeung Saroeun, executive director of the NGO Forum on Cambodia, told VOA “there was no consultation on each article” at the recent meeting. 

“The NGO representatives were unable to analyze and present their inputs,” said Saroeun, echoing concerns about its contents. “How is it [possible]? We need to debate on this.” 

The cybercrime law has resurfaced as the government works to complete two other draft internet laws, one covering cybersecurity and the other personal data protection. Experts have critiqued the drafts as providing expanded police powers to seize computer systems and making citizens’ data vulnerable to hacking and surveillance. 

Authorities have also sought to create a national internet gateway that would require traffic to run through centralized government servers, though the status of that project has been unclear since early 2022 when the government said it faced delays. 

Biden administration announces $6.6 billion to ensure leading-edge microchips are built in US 

WILMINGTON, Del. — The Biden administration pledged on Monday to provide up to $6.6 billion so that a Taiwanese semiconductor giant can expand the facilities it is already building in Arizona and better ensure that the most-advanced microchips are produced domestically for the first time. 

Commerce Secretary Gina Raimondo said the funding for Taiwan Semiconductor Manufacturing Co. means the company can expand on its existing plans for two facilities in Phoenix and add a third, newly announced production hub. 

“These are the chips that underpin all artificial intelligence, and they are the chips that are the necessary components for the technologies that we need to underpin our economy,” Raimondo said on a call with reporters, adding that they were vital to the “21st century military and national security apparatus.” 

The funding is tied to a sweeping 2022 law that President Joe Biden has celebrated and which is designed to revive U.S. semiconductor manufacturing. Known as the CHIPS and Science Act, the $280 billion package is aimed at sharpening the U.S. edge in military technology and manufacturing while minimizing the kinds of supply disruptions that occurred in 2021, after the start of the coronavirus pandemic, when a shortage of chips stalled factory assembly lines and fueled inflation. 

The Biden administration has promised tens of billions of dollars to support construction of U.S. chip foundries and reduce reliance on Asian suppliers, which Washington sees as a security weakness. 

“Semiconductors – those tiny chips smaller than the tip of your finger – power everything from smartphones to cars to satellites and weapons systems,” Biden said in a statement. “TSMC’s renewed commitment to the United States, and its investment in Arizona represent a broader story for semiconductor manufacturing that’s made in America and with the strong support of America’s leading technology firms to build the products we rely on every day.” 

Taiwan Semiconductor Manufacturing Co. produces nearly all of the leading-edge microchips in the world and plans to eventually do so in the U.S. 

It began construction of its first facility in Phoenix in 2021, and started work on a second hub last year, with the company increasing its total investment in both projects to $40 billion. The third facility should be producing microchips by the end of the decade and will see the company’s commitment increase to a total of $65 billion, Raimondo said. 

The investments would put the U.S. on track to produce roughly 20% of the world’s leading-edge chips by 2030, and Raimondo said they should help create 6,000 manufacturing jobs and 20,000 construction jobs, as well as thousands of new positions with suppliers and other chip-related industries connected to the Arizona projects. 

The potential incentives announced Monday include $50 million to help train the workforce in Arizona to be better equipped to work in the new facilities. Additionally, approximately $5 billion of proposed loans would be available through the CHIPS and Science Act. 

“TSMC’s commitment to manufacture leading-edge chips in Arizona marks a new chapter for America’s semiconductor industry,” Lael Brainard, director of the White House National Economic Council, told reporters. 

The announcement came as U.S. Treasury Secretary Janet Yellen was traveling in China. Senior administration officials were asked on the call with reporters if the Biden administration gave China a heads-up on the coming investment, given the delicate geopolitics surrounding Taiwan. The officials said only that their focus in making Monday’s announcement was solely on advancing U.S. manufacturing. 

“We are thrilled by the progress of our Arizona site to date,” C.C. Wei, CEO of TSMC, said in a statement, “and are committed to its long-term success.” 

US, Europe Issue Strictest Rules Yet on AI

WASHINGTON — In recent weeks, the United States, Britain and the European Union have issued the strictest regulations yet on the use and development of artificial intelligence, setting a precedent for other countries.

This month, the United States and the U.K. signed a memorandum of understanding allowing for the two countries to partner in the development of tests for the most advanced artificial intelligence models, following through on commitments made at the AI Safety Summit last November.

These actions come on the heels of the European Parliament’s March vote to adopt its first set of comprehensive rules on AI. The landmark decision sets out a wide-ranging set of laws to regulate this exploding technology.

At the time, Brando Benifei, co-rapporteur on the Artificial Intelligence Act plenary vote, said, “I think today is again an historic day on our long path towards regulation of AI. … The first regulation in the world that is putting a clear path towards a safe and human-centric development of AI.”

The new rules aim to protect citizens from dangerous uses of AI, while exploring its boundless potential.

Beth Noveck, professor of experiential AI at Northeastern University, expressed enthusiasm about the rules.

“It’s really exciting that the EU has passed really the world’s first … binding legal framework addressing AI. It is, however, not the end; it is really just the beginning.”

The new rules will be applied according to risk level: the higher the risk, the stricter the rules.

“It’s not regulating the tech,” she said. “It’s regulating the uses of the tech, trying to prohibit and to restrict and to create controls over the most malicious uses — and transparency around other uses.

“So things like what China is doing around social credit scoring, and surveillance of its citizens, unacceptable.”

Noveck described what she called “high-risk uses” that would be subject to scrutiny. Those include the use of tools in ways that could deprive people of their liberty or within employment.

“Then there are lower risk uses, such as the use of spam filters, which involve the use of AI or translation,” she said. “Your phone is using AI all the time when it gives you the weather; you’re using Siri or Alexa, we’re going to see a lot less scrutiny of those common uses.”

But as AI experts point out, new laws just create a framework for a new model of governance on a rapidly evolving technology.

Dragos Tudorache, co-rapporteur on the AI Act plenary vote, said, “Because AI is going to have an impact that we can’t only measure through this act, we will have to be very mindful of this evolution of the technology in the future and be prepared.”

In late March, the Biden administration issued the first government-wide policy to mitigate the risks of artificial intelligence while harnessing its benefits.

The announcement followed President Joe Biden’s executive order last October, which called on federal agencies to lead the way toward better governance of the technology without stifling innovation.

“This landmark executive order is testament to what we stand for: safety, security, trust, openness,” Biden said at the time, “proving once again that America’s strength is not just the power of its example, but the example of its power.”

Looking ahead, experts say the challenge will be to update rules and regulations as the technology continues to evolve.

Scathing federal report rips Microsoft for response to Chinese hack

BOSTON — In a scathing indictment of Microsoft corporate security and transparency, a Biden administration-appointed review board issued a report Tuesday saying “a cascade of errors” by the tech giant let state-backed Chinese cyber operators break into email accounts of senior U.S. officials including Commerce Secretary Gina Raimondo.

The Cyber Safety Review Board, created in 2021 by executive order, describes shoddy cybersecurity practices, a lax corporate culture and a lack of sincerity about the company’s knowledge of the targeted breach, which affected multiple U.S. agencies that deal with China.

It concluded that “Microsoft’s security culture was inadequate and requires an overhaul” given the company’s ubiquity and critical role in the global technology ecosystem. Microsoft products “underpin essential services that support national security, the foundations of our economy, and public health and safety.”

The panel said the intrusion, discovered in June by the State Department and dating to May, “was preventable and should never have occurred,” and it blamed its success on “a cascade of avoidable errors.” What’s more, the board said, Microsoft still doesn’t know how the hackers got in.

The panel made sweeping recommendations, including urging Microsoft to put on hold adding features to its cloud computing environment until “substantial security improvements have been made.”

It said Microsoft’s CEO and board should institute “rapid cultural change,” including publicly sharing “a plan with specific timelines to make fundamental, security-focused reforms across the company and its full suite of products.”

In a statement, Microsoft said it appreciated the board’s investigation and would “continue to harden all our systems against attack and implement even more robust sensors and logs to help us detect and repel the cyber-armies of our adversaries.”

In all, the state-backed Chinese hackers broke into the Microsoft Exchange Online email of 22 organizations and more than 500 individuals around the world — including the U.S. ambassador to China, Nicholas Burns — accessing some cloud-based email boxes for at least six weeks and downloading some 60,000 emails from the State Department alone, the 34-page report said. Three think tanks and foreign government entities, including a number of British organizations, were among those compromised, it said.

The board, convened by Homeland Security Secretary Alejandro Mayorkas in August, accused Microsoft of making inaccurate public statements about the incident — including issuing a statement saying it believed it had determined the likely root cause of the intrusion “when, in fact, it still has not.” Microsoft did not update that misleading blog post, published in September, until mid-March, after the board repeatedly asked if it planned to issue a correction, it said.

The board also expressed concern about a separate hack disclosed by the Redmond, Washington, company in January, this one of email accounts — including those of an undisclosed number of senior Microsoft executives and customers — and attributed to state-backed Russian hackers.

The board lamented “a corporate culture that deprioritized both enterprise security investments and rigorous risk management.”

The Chinese hack, initially disclosed by Microsoft in a July blog post, was carried out by a group the company calls Storm-0558. That same group, the panel noted, has been engaged in similar intrusions — compromising cloud providers or stealing authentication keys so it can break into accounts — since at least 2009, targeting companies including Google, Yahoo, Adobe, Dow Chemical and Morgan Stanley.

Microsoft noted in its statement that the hackers involved are “well-resourced nation state threat actors who operate continuously and without meaningful deterrence.”

The company said that it recognized that recent events “have demonstrated a need to adopt a new culture of engineering security in our own networks,” and added that it had “mobilized our engineering teams to identify and mitigate legacy infrastructure, improve processes, and enforce security benchmarks.”

Kia Recalls 427,000 Telluride SUVs; Could Roll Away While Parked

New York — Kia is recalling more than 427,000 of its Telluride SUVs due to a defect that may cause the cars to roll away while they’re parked.

According to documents published by the National Highway Traffic Safety Administration, the intermediate shaft and right front driveshaft of certain 2020-2024 Tellurides may not be fully engaged. Over time, this can lead to “unintended vehicle movement” while the cars are in park — increasing potential crash risks.

Kia America decided to recall all 2020-2023 model year and select 2024 model year Tellurides earlier this month, NHTSA documents show. At the time, no injuries or crashes were reported.

Improper assembly is suspected to be the cause of the shaft engagement problem. The recall covers 2020-2024 Tellurides manufactured between Jan. 9, 2019, and Oct. 19, 2023; Kia America estimates that 1% of them have the defect.

To remedy this issue, recall documents say, dealers will update the affected cars’ electronic parking brake software and replace any damaged intermediate shafts for free. Owners who already incurred repair expenses will also be reimbursed.

In the meantime, drivers of the impacted Tellurides are instructed to manually engage the emergency brake before exiting the vehicle. Drivers can also confirm if their specific vehicle is included in this recall and find more information using the NHTSA site and/or Kia’s recall lookup platform.

Owner notification letters are set to be mailed out on May 15, with dealer notification beginning a few days prior.

The Associated Press reached out to Irvine, California-based Kia America for further comment Sunday. No comment was received.

Gmail Revolutionized Email 20 Years Ago

San Francisco — Google co-founders Larry Page and Sergey Brin loved pulling pranks, so they began rolling out outlandish ideas every April Fool’s Day not long after starting their company more than a quarter century ago. One year, Google posted a job opening for a Copernicus research center on the moon. Another year, the company said it planned to roll out a “scratch and sniff” feature on its search engine.

The jokes were consistently over-the-top, and people learned to laugh them off as another example of Google mischief. That’s why, on April Fool’s Day 20 years ago, Page and Brin decided to unveil something no one would believe was possible.

It was Gmail, a free service boasting 1 gigabyte of storage per account, an amount that sounds almost pedestrian in an age of 1-terabyte iPhones. But it sounded like a preposterous amount of email capacity back then, enough to store about 13,500 emails before running out of space compared to just 30 to 60 emails in the then-leading webmail services run by Yahoo and Microsoft. That translated into 250 to 500 times more email storage space.
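
As a rough back-of-the-envelope illustration of how those figures relate, the short calculation below assumes an average email size of about 75 kilobytes and rival quotas of 2 to 4 megabytes; both are inferences from the numbers in this story, not figures confirmed by Google.

    # Back-of-the-envelope math behind the storage comparison (assumed figures).
    GB = 1_000_000_000                       # bytes in a gigabyte (decimal)
    MB = 1_000_000                           # bytes in a megabyte

    avg_email = 75_000                       # assumption: roughly 75 KB per email
    gmail_quota = 1 * GB                     # Gmail's launch offer
    rival_low, rival_high = 2 * MB, 4 * MB   # assumption: rival webmail quotas

    print(gmail_quota // avg_email)                             # about 13,300 emails in 1 GB
    print(rival_low // avg_email, rival_high // avg_email)      # roughly 26 to 53 emails
    print(gmail_quota // rival_high, gmail_quota // rival_low)  # 250 to 500 times more storage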

Besides the quantum leap in storage, Gmail also came equipped with Google’s search technology so users could quickly retrieve a tidbit from an old email, photo or other personal information stored on the service. It also automatically threaded together a string of communications about the same topic, so everything flowed together as if it were a single conversation.
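
The threading idea can be sketched in a few lines of Python. The example below simply groups messages by a normalized subject line; it is an illustration of the general concept, with a made-up inbox, not a description of Gmail’s actual implementation.

    # Minimal conversation-threading sketch (illustrative only).
    import re
    from collections import defaultdict

    def normalize_subject(subject: str) -> str:
        """Drop 'Re:'/'Fwd:' prefixes and extra whitespace so replies share a key."""
        cleaned = re.sub(r"^\s*((re|fwd?)\s*:\s*)+", "", subject, flags=re.IGNORECASE)
        return " ".join(cleaned.split()).lower()

    def thread_messages(messages):
        """Group (subject, body) pairs into threads keyed by normalized subject."""
        threads = defaultdict(list)
        for subject, body in messages:
            threads[normalize_subject(subject)].append((subject, body))
        return dict(threads)

    inbox = [
        ("Lunch Friday?", "Want to grab lunch?"),
        ("Re: Lunch Friday?", "Sure, noon works."),
        ("Fwd: Re: Lunch Friday?", "Forwarding the plan."),
        ("Quarterly report", "Draft attached."),
    ]
    print(thread_messages(inbox))  # two threads: 'lunch friday?' and 'quarterly report'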

“The original pitch we put together was all about the three ‘S’s’ — storage, search and speed,” said former Google executive Marissa Mayer, who helped design Gmail and other company products before later becoming Yahoo’s CEO.

It was such a mind-bending concept that shortly after The Associated Press published a story about Gmail late on the afternoon of April Fool’s 2004, readers began calling and emailing to inform the news agency it had been duped by Google’s pranksters.

“That was part of the charm, making a product that people won’t believe is real. It kind of changed people’s perceptions about the kinds of applications that were possible within a web browser,” former Google engineer Paul Buchheit recalled during a recent AP interview about his efforts to build Gmail.

It took three years to do as part of a project called “Caribou” — a reference to a running gag in the Dilbert comic strip. “There was something sort of absurd about the name Caribou, it just made me laugh,” said Buchheit, the 23rd employee hired at a company that now employs more than 180,000 people.

The AP knew Google wasn’t joking about Gmail because an AP reporter had been abruptly asked to come down from San Francisco to the company’s Mountain View, California, headquarters to see something that would make the trip worthwhile.

After arriving at a still-developing corporate campus that would soon blossom into what became known as the “Googleplex,” the AP reporter was ushered into a small office where Page was wearing an impish grin while sitting in front of his laptop computer.

Page, then just 31 years old, proceeded to show off Gmail’s sleekly designed inbox and demonstrated how quickly it operated within Microsoft’s now-retired Internet Explorer web browser. And he pointed out there was no delete button featured in the main control window because it wouldn’t be necessary, given Gmail had so much storage and could be so easily searched. “I think people are really going to like this,” Page predicted.

As with so many other things, Page was right. Gmail now has an estimated 1.8 billion active accounts — each one now offering 15 gigabytes of free storage bundled with Google Photos and Google Drive. Even though that’s 15 times more storage than Gmail initially offered, it’s still not enough for many users who rarely see the need to purge their accounts, just as Google hoped.

The digital hoarding of email, photos and other content is why Google, Apple and other companies now make money from selling additional storage capacity in their data centers. (In Google’s case, it charges anywhere from $30 annually for 200 gigabytes of storage to $250 annually for 5 terabytes of storage). Gmail’s existence is also why other free email services and the internal email accounts that employees use on their jobs offer far more storage than was fathomed 20 years ago.

“We were trying to shift the way people had been thinking because people were working in this model of storage scarcity for so long that deleting became a default action,” Buchheit said.

Gmail was a game changer in several other ways while becoming the first building block in the expansion of Google’s internet empire beyond its still-dominant search engine.

After Gmail came Google Maps and Google Docs with word processing and spreadsheet applications. Then came the acquisition of video site YouTube, followed by the introduction of the Chrome browser and the Android operating system that powers most of the world’s smartphones. With Gmail’s explicitly stated intention to scan the content of emails to get a better understanding of users’ interests, Google also left little doubt that digital surveillance in pursuit of selling more ads would be part of its expanding ambitions.

Although it immediately generated a buzz, Gmail started out with a limited scope because Google initially only had enough computing capacity to support a small audience of users.

But that scarcity created an air of exclusivity around Gmail that drove feverish demand for elusive invitations to sign up. At one point, invitations to open a Gmail account were selling for $250 apiece on eBay. “It became a bit like a social currency, where people would go, ‘Hey, I got a Gmail invite, you want one?’” Buchheit said.

Although signing up for Gmail became increasingly easier as more of Google’s network of massive data centers came online, the company didn’t begin accepting all comers to the email service until it opened the floodgates as a Valentine’s Day present to the world in 2007.