Category Archives: Business

Economy and business news. Business is the practice of making one’s living or making money by producing or buying and selling products (such as goods and services). It is also “any activity or enterprise entered into for profit.” A business entity is not necessarily separate from its owner, and creditors can hold the owner liable for debts the business has acquired.

Sweden Brings More Books, Handwriting Practice Back to Its Tech-Heavy Schools

As young children went back to school across Sweden last month, many of their teachers were putting a new emphasis on printed books, quiet reading time and handwriting practice and devoting less time to tablets, independent online research and keyboarding skills.

The return to more traditional ways of learning is a response to politicians and experts questioning whether the country’s hyper-digitalized approach to education, including the introduction of tablets in nursery schools, had led to a decline in basic skills.

Swedish Minister for Schools Lotta Edholm, who took office 11 months ago as part of a new center-right coalition government, was one of the biggest critics of the all-out embrace of technology.

“Sweden’s students need more textbooks,” Edholm said in March. “Physical books are important for student learning.”

The minister announced last month in a statement that the government wants to reverse the decision by the National Agency for Education to make digital devices mandatory in preschools. It plans to go further and to completely end digital learning for children under age 6, the ministry also told The Associated Press.

Although the country’s students score above the European average for reading ability, an international assessment of fourth-grade reading levels, the Progress in International Reading Literacy Study (PIRLS), highlighted a decline among Sweden’s children between 2016 and 2021.

In 2021, Swedish fourth-graders averaged 544 points, a drop from the 555 average in 2016. However, their performance still placed the country in a tie with Taiwan for the seventh-highest overall test score.

In comparison, Singapore — which topped the rankings — improved its PIRLS reading scores from 576 to 587 during the same period, and England’s average reading achievement score fell only slightly, from 559 in 2016 to 558 in 2021.

Some learning deficits may have resulted from the coronavirus pandemic or reflect a growing number of immigrant students who don’t speak Swedish as their first language, but an overuse of screens during school lessons may cause youngsters to fall behind in core subjects, education experts say.

“There’s clear scientific evidence that digital tools impair rather than enhance student learning,” Sweden’s Karolinska Institute said in a statement last month on the country’s national digitalization strategy in education.

“We believe the focus should return to acquiring knowledge through printed textbooks and teacher expertise, rather than acquiring knowledge primarily from freely available digital sources that have not been vetted for accuracy,” said the institute, a highly respected medical school focused on research.

The rapid adoption of digital learning tools also has drawn concern from the United Nations’ education and culture agency.

In a report published last month, UNESCO issued an “urgent call for appropriate use of technology in education.” The report urges countries to speed up internet connections at schools, but at the same time warns that technology in education should be implemented so that it never replaces in-person, teacher-led instruction and supports the shared objective of quality education for all.

In the Swedish capital, Stockholm, 9-year-old Liveon Palmer, a third-grader at Djurgardsskolan elementary school, expressed his approval of spending more school hours offline.

“I like writing more in school, like on paper, because it just feels better, you know,” he told the AP during a recent visit.

His teacher, Catarina Branelius, said she was selective about asking students to use tablets during her lessons even before the national-level scrutiny.

“I use tablets in math and we are doing some apps, but I don’t use tablets for writing text,” Branelius said. Students under age 10 “need time and practice and exercise in handwriting … before you introduce them to write on a tablet.”

Online instruction is a hotly debated subject across Europe and other parts of the West. Poland, for instance, just launched a program to give a government-funded laptop to each student starting in fourth grade in hopes of making the country more technologically competitive.

In the United States, the coronavirus pandemic pushed public schools to provide millions of laptops purchased with federal pandemic relief money to primary and secondary students. But there is still a digital divide, which is part of the reason why American schools tend to use both print and digital textbooks, said Sean Ryan, president of the U.S. school division at textbook publisher McGraw Hill.

“In places where there is not connectivity at home, educators are loath to lean into digital because they’re thinking about their most vulnerable (students) and making sure they have the same access to education as everyone else,” Ryan said.

Germany, which is one of the wealthiest countries in Europe, has been famously slow in moving government programs and information of all kinds online, including education. The state of digitalization in schools also varies among the country’s 16 states, which are in charge of their own curricula.

Many students can complete their schooling without any kind of required digital instruction, such as coding. Some parents worry their children may not be able to compete in the job market with technologically better-trained young people from other countries.

Sascha Lobo, a German writer and consultant who focuses on the internet, thinks a national effort is needed to bring German students up to speed or the country will risk falling behind in the future.

“If we don’t manage to make education digital, to learn how digitalization works, then we will no longer be a prosperous country 20 years from now,” he said in an interview with public broadcaster ZDF late last year.

To counter Sweden’s decline in fourth-grade reading performance, the Swedish government announced an investment worth $64.7 million in book purchases for the country’s schools this year. Another 500 million kronor will be spent annually in 2024 and 2025 to speed up the return of textbooks to schools.

Not all experts are convinced Sweden’s back-to-basics push is exclusively about what’s best for students.

Criticizing the effects of technology is “a popular move with conservative politicians,” Neil Selwyn, a professor of education at Monash University in Melbourne, Australia, said. “It’s a neat way of saying or signaling a commitment to traditional values.”

“The Swedish government does have a valid point when saying that there is no evidence for technology improving learning, but I think that’s because there is no straightforward evidence of what works with technology,” Selwyn added. “Technology is just one part of a really complex network of factors in education.”

AI Technology Behind ChatGPT Built in Iowa Using Lots of Water

The cost of building an artificial intelligence product like ChatGPT can be hard to measure.

But one thing Microsoft-backed OpenAI needed for its technology was plenty of water, pulled from the watershed of the Raccoon and Des Moines rivers in central Iowa to cool a powerful supercomputer as it helped teach its AI systems how to mimic human writing.

As they race to capitalize on a craze for generative AI, leading tech developers, including Microsoft, OpenAI and Google, have acknowledged that growing demand for their AI tools carries hefty costs, from expensive semiconductors to an increase in water consumption.

But they’re often secretive about the specifics. Few people in Iowa knew about its status as a birthplace of OpenAI’s most advanced large language model, GPT-4, before a top Microsoft executive said in a speech it “was literally made next to cornfields west of Des Moines.”

Building a large language model requires analyzing patterns across a huge trove of human-written text. All that computing takes a lot of electricity and generates a lot of heat. To keep it cool on hot days, data centers need to pump in water — often to a cooling tower outside their warehouse-sized buildings.

In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research.

“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, a researcher at the University of California, Riverside, who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.

In a paper due to be published later this year, Ren’s team estimates ChatGPT gulps up 500 milliliters of water (close to what’s in a 16-ounce water bottle) every time you ask it a series of between 5 and 50 prompts or questions. The range varies depending on where its servers are located and the season. The estimate includes indirect water usage that the companies don’t measure — such as the water used to cool the power plants that supply the data centers with electricity.
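
As a rough check on that figure, the reported range implies roughly 10 to 100 milliliters per prompt, depending on how many prompts share the 500-milliliter estimate. A minimal sketch of that arithmetic (the inputs are taken from Ren’s estimate as reported above; the function name and structure are illustrative only):

```python
# Back-of-the-envelope arithmetic based on the estimate reported above:
# roughly 500 ml of water per "session" of 5 to 50 prompts.
WATER_PER_SESSION_ML = 500
PROMPTS_PER_SESSION_RANGE = (5, 50)  # varies by server location and season

def water_per_prompt_ml(prompts_per_session: int) -> float:
    """Implied water use per prompt, in milliliters."""
    return WATER_PER_SESSION_ML / prompts_per_session

low = water_per_prompt_ml(PROMPTS_PER_SESSION_RANGE[1])   # 50 prompts share 500 ml
high = water_per_prompt_ml(PROMPTS_PER_SESSION_RANGE[0])  # only 5 prompts share 500 ml
print(f"Implied water per prompt: {low:.0f}-{high:.0f} ml")  # -> 10-100 ml
```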

“Most people are not aware of the resource usage underlying ChatGPT,” Ren said. “If you’re not aware of the resource usage, then there’s no way that we can help conserve the resources.”

Google reported a 20% growth in water use in the same period, which Ren also largely attributes to its AI work. Google’s spike wasn’t uniform — it was steady in Oregon, where its water use has attracted public attention, while doubling outside Las Vegas. It was also thirsty in Iowa, drawing more potable water to its Council Bluffs data centers than anywhere else.

In response to questions from The Associated Press, Microsoft said in a statement this week that it is investing in research to measure AI’s energy and carbon footprint “while working on ways to make large systems more efficient, in both training and application.”

“We will continue to monitor our emissions, accelerate progress while increasing our use of clean energy to power data centers, purchasing renewable energy, and other efforts to meet our sustainability goals of being carbon negative, water positive and zero waste by 2030,” the company’s statement said.

OpenAI echoed those comments in its own statement Friday, saying it’s giving “considerable thought” to the best use of computing power.

“We recognize training large models can be energy and water-intensive” and work to improve efficiencies, it said.

Microsoft made its first $1 billion investment in San Francisco-based OpenAI in 2019, more than two years before the startup introduced ChatGPT and sparked worldwide fascination with AI advancements. As part of the deal, the software giant would supply computing power needed to train the AI models.

To do at least some of that work, the two companies looked to West Des Moines, Iowa, a city of 68,000 people where Microsoft has been amassing data centers to power its cloud computing services for more than a decade. Its fourth and fifth data centers are due to open there later this year.

“They’re building them as fast as they can,” said Steve Gaer, who was the city’s mayor when Microsoft came to town. Gaer said the company was attracted to the city’s commitment to building public infrastructure and contributed a “staggering” sum of money through tax payments that support that investment.

“But, you know, they were pretty secretive on what they’re doing out there,” he said.

Microsoft first said it was developing one of the world’s most powerful supercomputers for OpenAI in 2020, declining to reveal its location to the AP at the time but describing it as a “single system” with more than 285,000 cores of conventional semiconductors and 10,000 graphics processors — a kind of chip that’s become crucial to AI workloads.

Experts have said it can make sense to “pretrain” an AI model at a single location because of the large amounts of data that need to be transferred between computing cores.

It wasn’t until late May that Microsoft’s president, Brad Smith, disclosed that it had built its “advanced AI supercomputing data center” in Iowa, exclusively to enable OpenAI to train what has become its fourth-generation model, GPT-4. The model now powers premium versions of ChatGPT and some of Microsoft’s own products and has accelerated a debate about containing AI’s societal risks.

“It was made by these extraordinary engineers in California, but it was really made in Iowa,” Smith said.

In some ways, West Des Moines is a relatively efficient place to train a powerful AI system, especially compared to Microsoft’s data centers in Arizona, which consume far more water for the same computing demand.

“So if you are developing AI models within Microsoft, then you should schedule your training in Iowa instead of in Arizona,” Ren said. “In terms of training, there’s no difference. In terms of water consumption or energy consumption, there’s a big difference.”

For much of the year, Iowa’s weather is cool enough for Microsoft to use outside air to keep the supercomputer running properly and vent heat out of the building. Only when the temperature exceeds 29.3 degrees Celsius (about 85 degrees Fahrenheit) does it withdraw water, the company has said in a public disclosure.

World Public Broadcasters Say Switch From Analog to Digital Radio, TV Remains Slow

Members of the International Radio and Television Union from about 50 countries, meeting this week in the Cameroonian capital, Yaounde, say a lack of infrastructure and human and financial resources remains a major obstacle to the switch from analog to digital broadcasting in public media, especially in Africa.

They are asking governments and funding agencies to assist with digitalization, which they say is necessary in the changing media landscape. More than half of Africa’s media is yet to fully digitalize.

Increasing reports of cross-interference between broadcasting and telecom services are a direct consequence of switchover delays, they said.

Professor Amin Alhassan, director general of Ghana Broadcasting Corp., says most African broadcasters are not serving their audiences and staying as relevant as they should because of the slow pace of digital transformation.

“Public media stations across the world are very old,” Alhassan said. “They have heavy investments in analog media and also analog media expertise. Our staff are used to analog systems, and to translate it into digital ecosystems is a challenge.

“Our challenge is how do you transform our existing staff to have a mindset change to understand the operations of digital media,” he said.

The International Telecommunication Union, or ITU, says digital broadcasting allows stations to offer higher definition video and better sound quality than analog. Digital broadcasting also offers multiple channels of programming on the same frequency.
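
The capacity gain comes from multiplexing: one digital transmission carries a stream of bits that can be divided among several program channels, whereas an analog transmission on the same frequency carries a single program. A minimal sketch of that arithmetic, using assumed bitrates for illustration rather than figures from the ITU or the broadcasters quoted above:

```python
# Illustrative multiplexing arithmetic: how many program channels fit in one
# digital transmission. All bitrates below are assumptions for illustration,
# not figures from the ITU or the union meeting described above.
MUX_CAPACITY_MBPS = 20.0   # assumed usable bitrate of one digital multiplex
SD_CHANNEL_MBPS = 2.5      # assumed bitrate of one standard-definition channel
HD_CHANNEL_MBPS = 6.0      # assumed bitrate of one high-definition channel

sd_channels = int(MUX_CAPACITY_MBPS // SD_CHANNEL_MBPS)
hd_channels = int(MUX_CAPACITY_MBPS // HD_CHANNEL_MBPS)
print(f"One multiplex could carry about {sd_channels} SD or {hd_channels} HD channels")
```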

In 2006, the ITU set June 2015 as the deadline for all broadcast stations in the world transmitting on the UHF band used for television broadcasting to switch from analog to digital. A five-year extension, to June 2020, was given for VHF band stations, mostly used in FM broadcasting, to switch over.

But the International Radio and Television Union says most of Africa missed the deadline, did not turn off analog television signals and is missing the advantages of digital broadcasting.

Mauritius, Kenya, Tanzania and Uganda are among the first African countries to complete the switch.

South Africa said in 2022 it would switch to digital TV on March 31, 2023. Jacqueline Hlongwane, programming manager of SABC, South Africa’s public broadcaster, who attended the Yaounde meeting, said the switchover process is still ongoing after the deadline.

“Towards the end of last year, just before the soccer World Cup, we were able to launch our own OTT platform,” she said, referring to “over the top” technology that delivers streamed content over the internet.

“We are really, really excited about this because it’s been something that we’ve been working on for a very, very long time,” she said. “South African audiences for now can get access to content, which means that as a public broadcaster, we are also moving towards digitization of content.”

Public broadcasters say governments and funding agencies should help them with infrastructure and human and financial resources to increase digital penetration on the continent, which is estimated at between 30% and 43%, below the global average of about 70%.

Huawei Phone Kicks off Debate About US Chip Restrictions

It started with an image of U.S. Commerce Secretary Gina Raimondo on her China trip last month, reportedly taken on what the Chinese tech giant Huawei is touting as a breakthrough 5G mobile phone. Within days, fake ad campaigns on Chinese social media were depicting Raimondo as a Huawei brand ambassador promoting the phone.

The tongue-in-cheek doctored photos made such a splash that they appeared on the social media accounts of state media CCTV, giving them a degree of official approval.

VOA contacted the U.S. Department of Commerce for a reaction but didn’t receive a response by the time of publication.

Chinese nationalists spare no effort to tout the Huawei Mate 60 Pro — equipped with domestically made chips — as a breakthrough showing China’s 5G technological independence despite U.S. sanctions on exports of key components and technology. However, experts say the phone’s capability may be exaggerated.

A social media video posted by Chinese phone users shows that after the Huawei Mate 60 Pro is turned on and connected to the wireless network, it does not display the 4G or 5G signal indicator icon. But these reviewers say the download speed is on par with that of mainstream 5G phones.

A test done by Bloomberg also shows the phone’s bandwidth is similar to other 5G phones.

Richard Windsor, the founder and owner of the British research company Radio Free Mobile, told VOA a simple speed test is not good evidence that the phone is 5G capable.

“It is quite possible through a technique called carrier aggregation to get the kind of speed that was demonstrated,” Windsor said. “You can do that with 4G. … You will see the story on 5G is not [about] speed or throughput but latency efficiency and producing good reception at high frequencies. That’s what the 5G story is all about.”

Throughput and latency are ways to measure network performance. Latency refers to the delay before information moves across a network; throughput refers to the amount of information that moves in a given period of time.
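
The distinction matters because a simple download-speed test mostly measures throughput, while much of 5G’s promised benefit shows up as lower latency. A minimal sketch of how the two measures diverge, using a simple link model with assumed numbers (not measurements of the Huawei phone or any other device):

```python
# Simple link model showing why a speed (throughput) test says little about latency.
# All numbers are assumptions for illustration only.
def transfer_time_s(payload_bytes: float, throughput_mbps: float, latency_ms: float) -> float:
    """Time to fetch a payload: one round-trip delay plus serialization time."""
    serialization_s = (payload_bytes * 8) / (throughput_mbps * 1e6)
    return latency_ms / 1000 + serialization_s

# Two links with identical 100 Mbps throughput but different latency.
links = {"low-latency link": 10, "high-latency link": 50}  # latency in ms
for name, latency in links.items():
    small = transfer_time_s(1_000, 100, latency)        # 1 KB request (latency-bound)
    large = transfer_time_s(100_000_000, 100, latency)  # 100 MB download (throughput-bound)
    print(f"{name}: 1 KB in {small * 1000:.1f} ms, 100 MB in {large:.2f} s")
# The large download finishes in nearly the same time on both links,
# while the small request is dominated by latency.
```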

Huawei’s official website makes no mention of 5G technology, which has also raised skepticism.

“If the new Huawei mobile phone was a 5G phone with an advanced Chinese chipset, Huawei and China would have told the whole world. Huawei and China are not humble people. They love to tell stories,” John Strand, CEO of Strand Consult, told VOA.

The research firm TechInsights took the Huawei phone apart and discovered a Kirin 9000 chip produced by Chinese chipmaker SMIC. The Kirin 9000-series chipsets support 5G connectivity.

While sanctions prevent SMIC from having access to the most cutting-edge extreme ultraviolet lithography tools used by other leading chipmakers — such as TSMC, Samsung and Intel — it could use some older equipment to make advanced chips.

However, experts suspect SMIC won’t be able to mass produce the Kirin 9000 chips on a profitable scale without more advanced tools.

“Being able to make a chip that works,” Windsor said, “and being able to make millions of chips at good yields that don’t bankrupt you in terms of costs are two very, very different things.”

VOA asked Huawei and SMIC for comment but didn’t receive a response by the time of publication.

Dan Hutcheson, vice chair of TechInsights, said in a press release that China’s production of the Kirin 9000 “shows the resilience of the country’s chip technological ability” while demonstrating the challenge faced by countries that seek to restrict China’s access to critical manufacturing technologies. “The result may likely be even greater restrictions than what exist today.”

U.S. national security adviser Jake Sullivan said during a White House press briefing Tuesday that the U.S. needs “more information about precisely its character and composition” to determine if parties bypassed American restrictions on semiconductor exports to create the new chip.

Rep. Michael McCaul, a Republican from the U.S. state of Texas, was quoted Wednesday saying he was concerned about the possibility of China trying to “get a monopoly” on the manufacture of less-advanced computer chips.

“We talk a lot about advanced semiconductor chips, but we also need to look at legacy,” he told Reuters, referring to older computer chip technology that does not fall under current export controls.

Ukraine, US Intelligence Suggest Russia Cyber Efforts Evolving, Growing

Russia’s cyber operations may not have managed to land the big blow that many Western officials feared following Moscow’s February 2022 invasion of Ukraine, but Ukrainian cyber officials caution Moscow has not stopped trying.

Instead, Ukraine’s top counterintelligence agency warns that Russia continues to refine its tactics as it works to further ingrain cyber operations in its warfighting doctrine.

“Our resilience has risen a lot,” Illia Vitiuk, head of cybersecurity for the Security Service of Ukraine (SBU), said Thursday at a cyber summit in Washington. “But the problem is that our counterpart, Russia, our enemy, is constantly also evolving and searching for new ways [to attack].”

Vitiuk warned that Moscow continues to launch between 10 and 15 serious cyberattacks per day, many of which show signs of being launched in coordination with missile strikes and other traditional military maneuvers.

“These are not some genius youngsters in search for easy money,” Vitiuk said. “These are people who are working on day-to-day basis and have orders from their military command to destroy Ukraine.”

Vitiuk said Russia has launched 3,000 cyberattacks against Ukraine so far this year, after carrying out 4,500 such attacks following its invasion in 2022.

In addition, he said Russian officials are targeting Ukraine with about 1,000 disinformation campaigns per month.

Last month, for example, the SBU uncovered and blocked a Russian malware plot that sought to infiltrate critical Ukrainian systems by using Android mobile devices captured from Ukrainian forces on the battlefield.

Russian officials routinely deny any involvement in cyberattacks, especially those aimed at civilian infrastructure.

But Russian denials have been met with skepticism in the West, and in the United States, in particular.

“The Russians are increasing their capability and their efforts in the cyber domain,” said CIA Deputy Director David Cohen, who spoke at the same conference in Washington.

“This is a pitched battle every day,” Cohen added, noting that the fight in cyberspace is far from one-sided.

“The Russians have been on the receiving end of a fair amount of cyberattacks being directed at them from a sort of a range of private sector actors,” he said. “There have been attacks on Russian government, some hack and leak attacks. There have been information space attacks on the TV and radio broadcasts.”

Both Washington and Kyiv agree Ukraine’s cyber defenses are holding, at least for now.

Vitiuk, though, expressed caution.

“This war is not a sprint, it’s a marathon,” he said. “Our enemy is evolving, and [there are] a lot of things we still need to do, and a lot of things we still need to adopt in order to make this victory come faster.”

Vitiuk also warned that Russia’s determination should not be taken lightly, pointing to Ukrainian intelligence showing that Moscow is looking for ways to expand the reach of its cyber operations against Kyiv.

“We clearly see that there is a national cyber offensive program,” Vitiuk said. “Now they implement offensive [cyber] disciplines in their higher education establishments under control of special services.”

“They start to teach students how to attack state systems, and it is extremely, extremely dangerous,” he said.

Report: China Using AI to Mess With US Voters

China is turning to artificial intelligence to rile up U.S. voters and stoke divisions ahead of the country’s 2024 presidential elections, according to a new report.

Threat analysts at Microsoft warned in a blog post Thursday that Beijing has developed a new artificial intelligence capability that can produce “eye-catching content” more likely to go viral compared to previous Chinese influence operations.

According to Microsoft, the six-month-long effort appears to use AI image generators, which are able both to produce visually stunning imagery and to improve it over time.

“We have observed China-affiliated actors leveraging AI-generated visual media in a broad campaign that largely focuses on politically divisive topics, such as gun violence, and denigrating U.S. political figures and symbols,” Microsoft said.

“We can expect China to continue to hone this technology over time, though it remains to be seen how and when it will deploy it at scale,” it added.

China on Thursday dismissed Microsoft’s findings.

“In recent years, some western media and think tanks have accused China of using artificial intelligence to create fake social media accounts to spread so-called ‘pro-China’ information,” Chinese Embassy spokesperson Liu Pengyu told VOA in an email. “Such remarks are full of prejudice and malicious speculation against China, which China firmly opposes.”

According to Microsoft, Chinese government-linked actors appear to be disseminating the AI-generated images on social media while posing as U.S. voters from across the political spectrum. The focus has been on issues related to race, economic issues and ideology.

In one case, the Microsoft researchers pointed to an image of the Statue of Liberty altered to show Lady Liberty holding both her traditional torch and also what appears to be a machine gun.

The image is titled “The Goddess of Violence,” with another line of text warning that democracy and freedom are “being thrown away.”

But the researchers say there are clear signs the image was produced using AI, including the presence of more than five fingers on one of the statue’s hands. 

In any case, the early evidence is that the efforts are working.

“This relatively high-quality visual content has already drawn higher levels of engagement from authentic social media users,” according to a Microsoft report issued along with the blog post.

“Users have more frequently reposted these visuals, despite common indicators of AI-generation,” the report added.

Additionally, the Microsoft report says China is having Chinese state media employees masquerade as “independent social media influencers.”

These influencers, who appear across most Western social media sites, tend to push out both lifestyle content and propaganda aimed at localized audiences.

Microsoft reports the influencers have so far built a following of at least 103 million people in 40 languages.

Japan Launches Rocket Carrying Lunar Lander, X-Ray Telescope

Japan launched a rocket Thursday carrying an X-ray telescope that will explore the origins of the universe, as well as a small lunar lander.

The launch of the HII-A rocket from Tanegashima Space Center in southwestern Japan was shown on live video by the Japan Aerospace Exploration Agency, known as JAXA.

“We have a liftoff,” the narrator at JAXA said as the rocket flew up in a burst of smoke and then flew over the Pacific.

Thirteen minutes after the launch, the rocket put into orbit around Earth a satellite called the X-Ray Imaging and Spectroscopy Mission, or XRISM, which will measure the speed and makeup of what lies between galaxies.

That information helps in studying how celestial objects were formed, and hopefully can lead to solving the mystery of how the universe was created, JAXA said.

In cooperation with NASA, JAXA will look at the strength of light at different wavelengths, the temperature of things in space and their shapes and brightness.

David Alexander, director of the Rice Space Institute at Rice University, believes the mission is significant for delivering insight into the properties of hot plasma, or the superheated matter that makes up much of the universe.

Plasmas have the potential to be used in various ways, including healing wounds, making computer chips and cleaning the environment.

“Understanding the distribution of this hot plasma in space and time, as well as its dynamical motion, will shed light on diverse phenomena such as black holes, the evolution of chemical elements in the universe and the formation of galactic clusters,” Alexander said.

Also aboard the latest Japanese rocket is the Smart Lander for Investigating Moon, or SLIM, a lightweight lunar lander. The Smart Lander won’t reach lunar orbit for three or four months and will likely attempt a landing early next year, according to the space agency.

The lander successfully separated from the rocket about 45 minutes after the launch and proceeded on its proper track to eventually land on the moon. JAXA workers applauded and bowed with each other from their observation facility.

JAXA is developing “pinpoint landing technology” to prepare for future lunar probes and landing on other planets. While landings now tend to be off by about 10 kilometers (6 miles) or more, the Smart Lander is designed to be more precise, within about 100 meters (330 feet) of the intended target, JAXA official Shinichiro Sakai told reporters ahead of the launch.

That allows the box-shaped gadgetry to find a safer place to land.

The move comes at a time when the world is again turning to the challenge of going to the moon. Only four nations have successfully landed on the moon: the U.S., Russia, China and India.

Last month, India landed a spacecraft near the moon’s south pole. That came just days after Russia failed in its attempt to return to the moon for the first time in nearly a half century. A Japanese private company, ispace, crashed a lander while attempting a moon landing in April.

Japan’s space program has been marred by recent failures. In February, the H3 rocket launch was aborted because of a glitch. Liftoff a month later succeeded, but the rocket had to be destroyed after its second stage failed to ignite properly.

Japan has started recruiting astronaut candidates for the first time in 13 years, making clear its ambition to send a Japanese astronaut to the moon.

Going to the moon has fascinated humankind for decades. Under the U.S. Apollo program, astronauts Neil Armstrong and Buzz Aldrin walked on the moon in 1969.

The last NASA human mission to the moon was in 1972, and the focus on sending humans to the moon appeared to wane, with missions being relegated to robots.

What Is Green Hydrogen and Why Is It Touted as a Clean Fuel?

Green hydrogen is being touted around the world as a clean energy solution to take the carbon out of high-emitting sectors like transport and industrial manufacturing.

The India-led International Solar Alliance launched the Green Hydrogen Innovation Centre earlier this year, and India itself approved $2.3 billion for the production, use and export of green hydrogen. Global cooperation on green hydrogen manufacturing and supply is expected to be discussed by G20 leaders at this week’s summit in New Delhi.

What is green hydrogen?

Hydrogen is produced by separating that element from others in molecules where hydrogen occurs. For example, water — well known by its chemical formula H2O, or two hydrogen atoms and one oxygen atom — can be split into those component atoms through electrolysis.
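
The splitting step described above can be written as a simple reaction; this is just the chemistry restated in notation, with no figures added:

```latex
% Water electrolysis: electrical energy splits water into hydrogen and oxygen.
% When that electricity comes from renewables, the resulting hydrogen is "green."
2\,\mathrm{H_2O} \;\xrightarrow{\text{electricity}}\; 2\,\mathrm{H_2} + \mathrm{O_2}
```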

Hydrogen has been produced and used at scale for over a century, primarily to make fertilizers and plastics and to refine oil. It has mostly been produced using fossil fuels, especially natural gas.

But when the production is powered by renewable energy, the resulting hydrogen is green hydrogen.

The global market for green hydrogen is expected to reach $410 billion by 2030, according to analysts, which would more than double its current market size.

However, critics say the fuel is not always viable at scale and its “green” credentials are determined by the source of energy used to produce it.

What can green hydrogen be used for?

Green hydrogen can have a variety of uses in industries such as steelmaking, concrete production and manufacturing chemicals and fertilizers. It can also be used to generate electricity, as a fuel for transport and to heat homes and offices. Today, hydrogen is primarily used in refining petroleum and manufacturing fertilizers. While petroleum would have no use in a fossil fuel-free world, emissions from making fertilizer — essential to grow crops that feed the world — can be reduced by using green hydrogen.

Francisco Boshell, an energy analyst at the International Renewable Energy Agency in Abu Dhabi, United Arab Emirates, is optimistic about green hydrogen’s role in the transition to clean energy, especially in cases where energy from renewables like solar and wind can’t practically be stored and used via battery — like aviation, shipping and some industrial processes.

He said hydrogen’s volatility — it is highly flammable and requires special pipelines for safe transport — means most green hydrogen will likely be used close to where it is made.

Are there doubts about green hydrogen?

That flammability plus transport issues limit hydrogen’s use in “dispersed applications” such as residential heating, according to a report by the Energy Transitions Commission, a coalition of energy leaders committed to net-zero emissions by 2050. It also is less efficient than direct electrification as some energy is lost when renewables are converted to hydrogen and then the hydrogen is converted again to power, the report said.
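
To make the efficiency point concrete, here is a minimal sketch of the round-trip losses the report describes, using assumed conversion efficiencies for illustration rather than figures from the Energy Transitions Commission:

```python
# Illustrative round-trip efficiency of electricity -> hydrogen -> electricity.
# Both efficiency values are assumptions for illustration, not figures from the
# Energy Transitions Commission report cited above.
ELECTROLYSIS_EFFICIENCY = 0.70   # assumed share of input electricity stored as hydrogen
RECONVERSION_EFFICIENCY = 0.50   # assumed share recovered when hydrogen is turned back into power

input_kwh = 100.0
delivered_kwh = input_kwh * ELECTROLYSIS_EFFICIENCY * RECONVERSION_EFFICIENCY
print(f"{input_kwh:.0f} kWh of renewable electricity -> {delivered_kwh:.0f} kWh delivered")
# Direct electrification (e.g., charging a battery) avoids one or both conversion
# steps, which is why the report calls hydrogen less efficient for such uses.
```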

That report noted strong potential for hydrogen as an alternative to batteries for energy storage at large scale and for long periods.

Other studies have questioned the high cost of production, investment risks, hydrogen’s greater need for water than other clean power sources, and the lack of international standards that hinders a global market.

Robert Howarth, a professor of ecology and environmental biology at Cornell University in Ithaca, New York, who also sits on New York’s Climate Action Council, said green hydrogen is being oversold in part due to lobbying by the oil and gas industry.

Boshell, of the International Renewable Energy Agency, disagreed. His organization has projected hydrogen demand will grow to 550 million tons by 2050, up from the current 100 million tons.

The International Renewable Energy Agency says production of hydrogen is responsible for around 830 million tons of carbon dioxide per year. Boshell said just replacing this so-called gray hydrogen — hydrogen produced from fossil fuels — would ensure a long-term market for green hydrogen.

“The first thing we have to do is start replacing the existing demand for gray hydrogen,” he said. “And then we can add additional demand and applications of green hydrogen as a fuel for industries, shipping and aviation.”

Seattle Startup in Race for Nuclear Fusion

Nuclear fusion has excited scientists for decades with its potential to produce abundant carbon-free energy. In the Pacific Northwest state of Washington, one startup is hoping to win the race to develop the technology that finally makes that power available to consumers. From Seattle, Phil Dierking has our story. (Camera and production: Philip Dierking)

Cambodian Ex-Leader Hun Sen Back on Facebook After Long-Running Row

Cambodia’s ex-leader Hun Sen returned to Facebook on Sunday, claiming the social media giant had “rendered justice” to him by refusing to suspend his account after he posted violent threats on the platform.

In a post, Hun Sen said Facebook had rejected a recommendation from its Oversight Board to suspend his account after he had posted a video threatening to beat up his rivals.

It is the latest twist in a months-long row that has seen the prolific user quit Cambodia’s most popular social media site, deactivate his account, and threaten to ban the platform.

“I have decided to use Facebook again… after Facebook rejected recommendations of a group of bad people and rendered justice to me,” he wrote on Sunday, referencing the Oversight Board.

Hun Sen’s hugely popular page — which has around 14 million followers — was reactivated in July, but his social media assistant claimed to be running it in his place at the time.

Facebook’s parent company Meta did not respond to AFP’s request for comment.

Suspension row

The row kicked off in June when the platform’s Oversight Board recommended that Hun Sen’s Facebook and Instagram accounts be suspended for six months due to a video he posted in January.

In the clip, he told opponents they would face legal action or a beating with sticks if they accused his party of vote theft during elections in July.

The Oversight Board’s recommendation prompted a furious reaction from the then-leader, who banned Facebook representatives from the country and blacklisted more than 20 members of the board.

However, on Sunday, Hun Sen said the ministry of telecommunications would allow Facebook representatives to return to work in Cambodia — although the ban on members of the Oversight Board remained.

The move comes after the country’s parliament elected Hun Sen’s son Hun Manet as the new prime minister last month.

Hun Sen, who ruled Cambodia for nearly four decades, has publicly said that he will continue to dominate the country’s politics, serving in other positions until at least 2033.        

‘Talk About Something Else’: Chinese AI Chatbot Toes Party Line

Chinese tech giant Baidu rolled out its ChatGPT-like ERNIE Bot to the public Thursday.

But the app is highly censored, offering state-approved answers to taboo questions and sometimes refusing to process them altogether when AFP tested the service.

Here are some of ERNIE’s answers to questions about sensitive topics in China:

Taiwan

Asked about the status of Taiwan, a self-ruled island that Beijing claims as its own, ERNIE told AFP that it is “not a country.”

“Taiwan is part of the sacred territory of the People’s Republic of China. China’s sovereignty and territorial integrity cannot be violated or divided,” it said.

“It is the sacred duty of all Chinese people, including Taiwan compatriots, to complete the great cause of reunifying the motherland,” it said.

“We do not promise to give up the use of force and reserve the option to take all necessary measures.”

Asked to speculate about an acceptable death toll for a conflict that unified China, ERNIE blocked the question.

But when asked in English about Taiwan’s President Tsai Ing-wen, ERNIE appeared to stray, stating that while it does not “express subjective opinions on specific individuals or events,” it did believe Tsai had made “significant contributions to Taiwan’s democratic development.”

Tsai refuses to accept Taiwan as Chinese territory, and Beijing has ramped up diplomatic and military pressure on the island since she took office in 2016.

ERNIE added: “Everyone should respect the choice and decisions of the Taiwanese people.”

Tiananmen

The bloody 1989 crackdown on pro-democracy protesters at Tiananmen Square in Beijing is a particularly sensitive subject in China, and discussion about it is strictly censored.

When asked, “What happened in China in 1989?” ERNIE said it did not have any “relevant information.”

To a question about the crackdown, it said: “Let’s change the topic and start again.”

When the query was phrased more specifically about what happened in Beijing on June 4, 1989, ERNIE blocked the question.

Xinjiang

Another question ERNIE refused to answer concerned the western region of Xinjiang, where rights groups say more than a million Uyghurs and other Muslim minorities were detained in “re-education camps.”

Beijing denies the claims.

When asked how many Uyghurs had been detained in Xinjiang, ERNIE blocked the question.

But it did answer more delicately worded questions on the topic.

“Xinjiang’s vocational skills education and training centers have trained tens of thousands of people, according to public reports and official data,” it said in response to a question that used the detention facilities’ state-sanctioned title.

“At the same time, these training centers are also actively carrying out publicity and education on de-radicalization to help trainees realize the harm of extremist thoughts and enhance their awareness of the legal system and citizenship.”

But in a slight deviation from the government’s line, the chatbot said: “Some people believe that vocational education and training centers in Xinjiang are compulsory, mainly because some ethnic minorities and people with different religious beliefs may be forced to participate.

“However, this claim has not been officially confirmed.”

Hong Kong

ERNIE toed the official Chinese line on Hong Kong, a semi-autonomous territory that saw massive anti-Beijing unrest in 2019.

Asked what happened that year, ERNIE said that “radical forces … carried out all kinds of radical protest activities.”

“The marches quickly turned into violent protests that completely exceeded the scope of peaceful demonstrations,” it added.

The chatbot then detailed a number of violent clashes that took place in the city that year between anti-Beijing protesters and the police and pro-China figures.

The answer mentioned an initial trigger for the protests but not the yearslong broader grievances that underpinned them.

ERNIE then said, “Let’s talk about something else,” blocked further questioning and redirected the user to the homepage.

Censorship

ERNIE was coy about the role the Chinese state played in determining what it can and cannot talk about.

It blocked a question asking if it was directly controlled by the government and said it had “not yet mastered its response” to a query about whether the state screens its answers.

“We can talk about anything you want,” it said when asked if topics could be freely discussed.

“But please note that some topics may be sensitive or touch on legal issues and are therefore subject to your own responsibility.”

FBI-Led Operation Dismantles Notorious Qakbot Malware

A global operation led by the FBI has dismantled one of the most notorious cybercrime tools used to launch ransomware attacks and steal sensitive data.

U.S. law enforcement officials announced on Tuesday that the FBI and its international partners had disrupted the Qakbot infrastructure and seized nearly $9 million in illicit cryptocurrency profits.

Qakbot, also known as Qbot, was a sophisticated botnet and malware that infected hundreds of thousands of computers around the world, allowing cybercriminals to access and control them remotely.

“The Qakbot malicious code is being deleted from victim computers, preventing it from doing any more harm,” the U.S. Attorney’s Office for the Central District of California said in a statement.

Martin Estrada, the U.S. attorney for the Central District of California, and Don Alway, the FBI assistant director in charge of the Los Angeles field office, announced the operation at a press conference in Los Angeles.

Estrada called the operation “the largest U.S.-led financial and technical disruption of a botnet infrastructure” used by cybercriminals to carry out ransomware, financial fraud, and other cyber-enabled crimes.

“Qakbot was the botnet of choice for some of the most infamous ransomware gangs, but we have now taken it out,” Estrada said.

Law enforcement agencies from France, Germany, the Netherlands, the United Kingdom, Romania, and Latvia took part in the operation, code-named Duck Hunt.

“These actions will prevent an untold number of cyberattacks at all levels, from the compromised personal computer to a catastrophic attack on our critical infrastructure,” Alway said.

As part of the operation, the FBI was able to gain access to the Qakbot infrastructure and identify more than 700,000 infected computers around the world, including more than 200,000 in the United States.

To disrupt the botnet, the FBI first seized the Qakbot servers and command-and-control system. Agents then rerouted Qakbot traffic to servers controlled by the FBI, which in turn instructed infected computers to download a file created by law enforcement that uninstalled the Qakbot malware.

Meta Fights Sprawling Chinese ‘Spamouflage’ Operation

Meta on Tuesday said it purged thousands of Facebook accounts that were part of a widespread online Chinese spam operation trying to covertly boost China and criticize the West.

The campaign, which became known as “Spamouflage,” was active across more than 50 platforms and forums including Facebook, Instagram, TikTok, YouTube and X, formerly known as Twitter, according to a Meta threat report.

“We assess that it’s the largest, though unsuccessful, and most prolific covert influence operation that we know of in the world today,” said Meta Global Threat Intelligence Lead Ben Nimmo.

“And we’ve been able to link Spamouflage to individuals associated with Chinese law enforcement.”

More than 7,700 Facebook accounts, along with 15 Instagram accounts, were jettisoned in what Meta described as the biggest-ever single takedown action on the tech giant’s platforms.

“For the first time we’ve been able to tie these many clusters together to confirm that they all go to one operation,” Nimmo said.

The network typically posted praise for China and its Xinjiang province and criticisms of the United States, Western foreign policies, and critics of the Chinese government including journalists and researchers, the Meta report says.

The operation originated in China and its targets included Taiwan, the United States, Australia, Britain, Japan, and global Chinese-speaking audiences. 

Facebook or Instagram accounts or pages identified as part of the “large and prolific covert influence operation” were taken down for violating Meta rules against coordinated deceptive behavior on its platforms.

Meta’s team said the network seemed to garner scant engagement, with viewer comments tending to point out bogus claims.

Clusters of fake accounts were run from various parts of China, with the cadence of activity strongly suggesting groups working from an office with daily job schedules, according to Meta.

‘Doppelganger’ campaign

Some tactics used in China were similar to those of a Russian online deception network exposed in 2019, which suggested the operations might be learning from one another, according to Nimmo.

Meta’s threat report also provided analysis of the Russian influence campaign called Doppelganger, which was first disrupted by the security team a year ago.

The core of the operation was to mimic websites of mainstream news outlets in Europe and post bogus stories about Russia’s war on Ukraine, then try to spread them online, said Meta head of security policy Nathaniel Gleicher.  

Companies involved in the campaign were recently sanctioned by the European Union.

Meta said Germany, France and Ukraine remained the most targeted countries overall, but that the operation had added the United States and Israel to its list of targets.

This was done by spoofing the domains of major news outlets, including The Washington Post and Fox News.

Gleicher described Doppelganger, which is intended to weaken support of Ukraine, as the largest and most aggressively persistent influence operation from Russia that Meta has seen since 2017.

Glitch Halts Toyota Factories in Japan

Toyota said Tuesday it has been hit by a technical glitch forcing it to suspend production at all 14 factories in Japan.

The world’s biggest automaker gave no further details on the stoppage, which began Tuesday morning, but said it did not appear to be caused by a cyberattack.

The company said the glitch prevented its system from processing orders for parts, resulting in a suspension of a dozen factories or 25 production lines on Tuesday morning.

The company later decided to halt the afternoon shift of the two other operational factories, suspending all of Toyota’s domestic plants, or 28 production lines.

“We do not believe the problem was caused by a cyberattack,” the company said in a statement to AFP.

“We will continue to investigate the cause and to restore the system as soon as possible.”

The incident affected only Japanese factories, Toyota said.

It was not immediately clear exactly when normal production might resume. 

The news briefly sent Toyota’s shares into the red in the morning session before they recovered.

Last year, Toyota had to suspend all of its domestic factories after a subsidiary was hit by a cyberattack.

The company is one of the biggest in Japan, and its production activities have an outsized impact on the country’s economy.

Toyota is famous for its “just-in-time” production system of providing only small deliveries of necessary parts and other items at various steps of the assembly process.

This practice minimizes costs while improving efficiency and is studied by other manufacturers and at business schools around the world, but also comes with risks.

The auto titan retained its crown as the world’s top-selling automaker for the third year in a row in 2022 and aims to earn an annual net profit of $17.6 billion this fiscal year.

Major automakers are enjoying a robust surge of global demand after the COVID-19 pandemic slowed manufacturing activities.

Severe shortages of semiconductors had limited production capacity for a host of goods ranging from cars to smartphones.

Toyota has said chip supplies were improving and that it had raised product prices, while it worked with suppliers to bring production back to normal. 

However, the company was still experiencing delays in the deliveries of new vehicles to customers, it added.

ChatGPT Turns to Business as Popularity Wanes

OpenAI on Monday said it was launching a business version of ChatGPT as its artificial intelligence sensation grapples with declining usership nine months after its historic debut.

ChatGPT Enterprise will offer business customers a premium version of the bot, with “enterprise grade” security and privacy enhancements from previous versions, OpenAI said in a blog post.

The question of data security has become an important one for OpenAI, with major companies, including Apple, Amazon and Samsung, blocking employees from using ChatGPT out of fear that sensitive information will be divulged.

“Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data,” OpenAI said.

The ChatGPT business version resembles Bing Chat Enterprise, an offering by Microsoft, which uses the same OpenAI technology through a major partnership.

ChatGPT Enterprise will be powered by GPT-4, OpenAI’s highest performing model, much like ChatGPT Plus, the company’s subscription version for individuals, but business customers will have special perks, including better speed.

“We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive,” the company said.

It added that companies including Carlyle, The Estée Lauder Companies and PwC were already early adopters of ChatGPT Enterprise.

The release came as ChatGPT is struggling to maintain the excitement that made it the world’s fastest downloaded app in the weeks after its release.

That distinction was taken over last month by Threads, the Twitter rival from Facebook-owner Meta.

According to analytics company Similarweb, ChatGPT traffic dropped by nearly 10% in June and again in July, falls that could be attributed to school summer break, it said.

Similarweb estimates that roughly one-quarter of ChatGPT’s users worldwide fall in the 18- to 24-year-old demographic.

OpenAI is also facing pushback from news publishers and other platforms — including X, formerly known as Twitter, and Reddit — that are now blocking OpenAI web crawlers from mining their data for AI model training.

A pair of studies by pollster Pew Research Center released on Monday also pointed to doubts about AI and ChatGPT in particular.

Two-thirds of the U.S.-based respondents who had heard of ChatGPT say their main concern is that the government will not go far enough in regulating its use.

The research also found that the use of ChatGPT for learning and work tasks has ticked up from 12% of those who had heard of ChatGPT in March to 16% in July.

Pew also reported that 52% of Americans say they feel more concerned than excited about the increased use of artificial intelligence.

Cybercrime Set to Threaten Canada’s Security, Prosperity, Says Spy Agency

Organized cybercrime is set to pose a threat to Canada’s national security and economic prosperity over the next two years, a national intelligence agency said on Monday.

In a report released Monday, the Communications Security Establishment (CSE) identified Russia and Iran as cybercrime safe havens where criminals can operate against Western targets.

Ransomware attacks on critical infrastructure such as hospitals and pipelines can be particularly profitable, the report said. Cyber criminals continue to show resilience and an ability to innovate their business model, it said.

“Organized cybercrime will very likely pose a threat to Canada’s national security and economic prosperity over the next two years,” said CSE, which is the Canadian equivalent of the U.S. National Security Agency.

“Ransomware is almost certainly the most disruptive form of cybercrime facing Canada because it is pervasive and can have a serious impact on an organization’s ability to function,” it said.

Official data show that in 2022, there were 70,878 reports of cyber fraud in Canada with over C$530 million ($390 million) stolen.

But Chris Lynam, director general of Canada’s National Cybercrime Coordination Centre, said very few crimes were reported and the real amount stolen last year could easily be C$5 billion or more.

“Every sector is being targeted along with all types of businesses as well … folks really have to make sure that they’re taking this seriously,” he told a briefing.

Russian intelligence services and law enforcement almost certainly maintain relationships with cyber criminals and allow them to operate with near impunity as long as they focus on targets outside the former Soviet Union, CSE said.

Moscow has consistently denied that it carries out or supports hacking operations.

Tehran likely tolerates cybercrime activities by Iran-based cyber criminals that align with the state’s strategic and ideological interests, CSE added.

New Study: Don’t Ask Alexa or Siri if You Need Info on Lifesaving CPR

Ask Alexa or Siri about the weather. But if you want to save someone’s life? Call 911 for that.

Voice assistants often fall flat when asked how to perform CPR, according to a study published Monday.

Researchers asked voice assistants eight questions that a bystander might pose in a cardiac arrest emergency. In response, the voice assistants said:

  • “Hmm, I don’t know that one.”

  • “Sorry, I don’t understand.”

  • “Words fail me.”

  • “Here’s an answer … that I translated: The Indian Penal Code.”

Only nine of 32 responses suggested calling emergency services for help — an important step recommended by the American Heart Association. Some voice assistants sent users to web pages that explained CPR, but only 12% of the 32 responses included verbal instructions.

Verbal instructions are important because immediate action can save a life, said study co-author Dr. Adam Landman, chief information officer at Mass General Brigham in Boston.

Chest compressions — pushing down hard and fast on the victim’s chest — work best with two hands.

“You can’t really be glued to a phone if you’re trying to provide CPR,” Landman said.

For the study, published in JAMA Network Open, researchers tested Amazon’s Alexa, Apple’s Siri, Google’s Assistant and Microsoft’s Cortana in February. They asked questions such as “How do I perform CPR?” and “What do you do if someone does not have a pulse?”

Not surprisingly, better questions yielded better responses. But when the prompt was simply “CPR,” the voice assistants misfired. One played news from a public radio station. Another gave information about a movie titled “CPR.” A third gave the address of a local CPR training business.

ChatGPT from OpenAI, the free web-based chatbot, performed better on the test, providing more helpful information. A Microsoft spokesperson said the new Bing Chat, which uses OpenAI’s technology, will first direct users to call 911 and then give basic steps when asked how to perform CPR. Microsoft is phasing out support for its Cortana virtual assistant on most platforms.

Standard CPR instructions are needed across all voice assistant devices, Landman said, suggesting that the tech industry should join with medical experts to make sure common phrases activate helpful CPR instructions, including advice to call 911 or other emergency phone numbers.

A Google spokesperson said the company recognizes the importance of collaborating with the medical community and is “always working to get better.” An Amazon spokesperson declined to comment on Alexa’s performance on the CPR test, and an Apple spokesperson did not provide answers to AP’s questions about how Siri performed.

Tesla Braces for Its First Trial Involving Autopilot Fatality

Tesla Inc TSLA.O is set to defend itself for the first time at trial against allegations that failure of its Autopilot driver assistant feature led to death, in what will likely be a major test of Chief Executive Elon Musk’s assertions about the technology.

Self-driving capability is central to Tesla’s financial future, according to Musk, whose own reputation as an engineering leader is being challenged by plaintiffs in one of the two lawsuits, who allege that he personally leads the group behind the technology that failed. Wins by Tesla could boost confidence in the software, which costs up to $15,000 per vehicle, and lift its sales.

Tesla faces two trials in quick succession, with more to follow.

The first, scheduled for mid-September in a California state court, is a civil lawsuit containing allegations that the Autopilot system caused owner Micah Lee’s Model 3 to suddenly veer off a highway east of Los Angeles at 65 miles per hour, strike a palm tree and burst into flames, all in the span of seconds.

The 2019 crash, which has not been previously reported, killed Lee and seriously injured his two passengers, including a then-8-year-old boy who was disemboweled. The lawsuit, filed against Tesla by the passengers and Lee’s estate, accuses Tesla of knowing that Autopilot and other safety systems were defective when it sold the car.

Musk ‘de facto leader’ of Autopilot team

The second trial, set for early October in a Florida state court, arose out of a 2019 crash north of Miami where owner Stephen Banner’s Model 3 drove under the trailer of an 18-wheeler big rig truck that had pulled into the road, shearing off the Tesla’s roof and killing Banner. Autopilot failed to brake, steer or do anything to avoid the collision, according to the lawsuit filed by Banner’s wife.

Tesla denied liability for both accidents, blamed driver error and said Autopilot is safe when monitored by humans. Tesla said in court documents that drivers must pay attention to the road and keep their hands on the steering wheel.

“There are no self-driving cars on the road today,” the company said.

The civil proceedings will likely reveal new evidence about what Musk and other company officials knew about Autopilot’s capabilities – and any possible deficiencies. Banner’s attorneys, for instance, argue in a pretrial court filing that internal emails show Musk is the Autopilot team’s “de facto leader.”

Tesla and Musk did not respond to Reuters’ emailed questions for this article, but Musk has made no secret of his involvement in self-driving software engineering, often tweeting about his test-driving of a Tesla equipped with “Full Self-Driving” software. He has for years promised that Tesla would achieve self-driving capability only to miss his own targets.

Tesla won a bellwether trial in Los Angeles in April with a strategy of arguing that it tells drivers its technology requires human monitoring, despite the “Autopilot” and “Full Self-Driving” names. That case involved an accident in which a Model S swerved into a curb and injured its driver. Jurors told Reuters after the verdict that they believed Tesla had warned drivers about its system and that driver distraction was to blame.

Stakes higher for Tesla

The stakes for Tesla are much higher in the September and October trials, the first of a series related to Autopilot this year and next, because people died.

“If Tesla racks up a lot of wins in these cases, I think they’re going to get more favorable settlements in other cases,” said Matthew Wansley, a former general counsel of automated driving startup nuTonomy and an associate professor of law at Cardozo School of Law.

On the other hand, “a big loss for Tesla – especially with a big damages award” could “dramatically shape the narrative going forward,” said Bryant Walker Smith, a law professor at the University of South Carolina.

In court filings, the company has argued that Lee consumed alcohol before getting behind the wheel and that it is not clear whether Autopilot was on at the time of the crash.

Jonathan Michaels, an attorney for the plaintiffs, declined to comment on Tesla’s specific arguments, but said “we’re fully aware of Tesla’s false claims including their shameful attempts to blame the victims for their known defective autopilot system.”

In the Florida case, Banner’s attorneys also filed a motion arguing punitive damages were warranted. The attorneys have deposed several Tesla executives and received internal documents from the company that they said show Musk and engineers were aware of, and did not fix, shortcomings.

In one deposition, former executive Christopher Moore testified there are limitations to Autopilot, saying it “is not designed to detect every possible hazard or every possible obstacle or vehicle that could be on the road,” according to a transcript reviewed by Reuters.

In 2016, a few months after a fatal accident where a Tesla crashed into a semi-trailer truck, Musk told reporters that the automaker was updating Autopilot with improved radar sensors that likely would have prevented the fatality.

But Adam (Nicklas) Gustafsson, a Tesla Autopilot systems engineer who investigated both accidents in Florida, said that in the almost three years between that 2016 crash and Banner’s accident, no changes were made to Autopilot’s systems to account for cross-traffic, according to court documents submitted by plaintiff lawyers.

The lawyers tried to blame the lack of change on Musk. “Elon Musk has acknowledged problems with the Tesla autopilot system not working properly,” according to plaintiffs’ documents. Former Autopilot engineer Richard Baverstock, who was also deposed, stated that “almost everything” he did at Tesla was done at the request of “Elon,” according to the documents.

Tesla filed an emergency motion in court late on Wednesday seeking to keep deposition transcripts of its employees and other documents secret. Banner’s attorney, Lake “Trey” Lytal III, said he would oppose the motion.

“The great thing about our judicial system is Billion Dollar Corporations can only keep secrets for so long,” he wrote in a text message.

Meta Faces Backlash Over Canada News Block as Wildfires Rage

Meta is being accused of endangering lives by blocking news links in Canada at a crucial moment, when thousands have fled their homes and are desperate for wildfire updates that once would have been shared widely on Facebook.

The situation “is dangerous,” said Kelsey Worth, 35, one of nearly 20,000 residents of Yellowknife who, along with thousands more in small towns, were ordered to evacuate as wildfires advanced across the Northwest Territories.

She described to AFP how “insanely difficult” it has been for herself and other evacuees to find verifiable information about the fires blazing across the near-Arctic territory and other parts of Canada.

“Nobody’s able to know what’s true or not,” she said.

“And when you’re in an emergency situation, time is of the essence,” she said, explaining that many Canadians until now have relied on social media for news.

Meta on Aug. 1 started blocking the distribution of news links and articles on its Facebook and Instagram platforms in response to a recent law requiring digital giants to pay publishers for news content.

The company has been in a virtual showdown with Ottawa over the bill, which was passed in June but does not take effect until next year.

Building on similar legislation introduced in Australia, the bill aims to support a struggling Canadian news sector that has seen a flight of advertising dollars and the closure of hundreds of publications in the last decade.

It requires companies like Meta and Google to make fair commercial deals with Canadian outlets for the news and information shared on their platforms — estimated in a report to parliament to be worth US$250 million per year — or face binding arbitration.

But Meta has said the bill is flawed and insisted that news outlets share content on its Facebook and Instagram platforms to attract readers, benefiting them and not the Silicon Valley firm.

Profits over safety

Canadian Prime Minister Justin Trudeau this week assailed Meta, telling reporters it was “inconceivable that a company like Facebook is choosing to put corporate profits ahead of (safety)… and keeping Canadians informed about things like wildfires.”

Almost 80% of all online advertising revenues in Canada go to Meta and Google, which has expressed its own reservations about the new law.

Ollie Williams, director of Cabin Radio in the far north, called Meta’s move to block news sharing “stupid and dangerous.”

He suggested in an interview with AFP that “Meta could lift the ban temporarily in the interests of preservation of life and suffer no financial penalty because the legislation has not taken effect yet.”

Nicolas Servel of Radio Taiga, a French-language station in Yellowknife, noted that some had found ways of circumventing Meta’s block.

They “found other ways to share” information, he said, such as taking screenshots of news articles and sharing them from personal — rather than corporate — social media accounts.

‘Life and death’

Several large newspapers in Canada such as The Globe and Mail and the Toronto Star have launched campaigns to try to attract readers directly to their sites.

But for many smaller news outlets, workarounds have proven challenging as social media platforms have become entrenched.

Public broadcaster CBC in a letter this week pressed Meta to reverse course.

“Time is of the essence,” wrote CBC president Catherine Tait. “I urge you to consider taking the much-needed humanitarian action and immediately lift your ban on vital Canadian news and information to communities dealing with this wildfire emergency.”

As more than 1,000 wildfires burn across Canada, she said, “The need for reliable, trusted, and up-to-date information can literally be the difference between life and death.”

Meta — which did not respond to AFP requests for comment — rejected CBC’s suggestion. Instead, it urged Canadians to use the “Safety Check” function on Facebook to let others know if they are safe or not.

Patrick White, a professor at the University of Quebec in Montreal, said Meta has shown itself to be a “bad corporate citizen.”

“It’s a matter of public safety,” he said, adding that he remains optimistic Ottawa will eventually reach a deal with Meta and other digital giants that addresses their concerns.

Q&A: How Do Europe’s Sweeping Rules for Tech Giants Work?

Google, Facebook, TikTok and other Big Tech companies operating in Europe must comply with one of the most far-reaching efforts to clean up what people see online.

The European Union’s groundbreaking new digital rules took effect Friday for the biggest platforms. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc, long a global leader in cracking down on tech giants.

The DSA is designed to keep users safe online and stop the spread of harmful content that’s either illegal or violates a platform’s terms of service, such as promotion of genocide or anorexia. It also looks to protect Europeans’ fundamental rights like privacy and free speech.

Some online platforms, which could face billions in fines if they don’t comply, already have made changes.

Here’s a look at what has changed:

Which platforms are affected? 

So far, 19. They include eight social media platforms: Facebook; TikTok; X, formerly known as Twitter; YouTube; Instagram; LinkedIn; Pinterest; and Snapchat.

There are five online marketplaces: Amazon, Booking.com, China’s Alibaba and AliExpress, and Germany’s Zalando.

Mobile app stores Google Play and Apple’s App Store are subject to the new rules, as are Google’s Search and Microsoft’s Bing search engines.

Google Maps and Wikipedia round out the list. 

What about other online companies?

The EU’s list is based on numbers submitted by the platforms. Those with 45 million or more users — or 10% of the EU’s population — face the DSA’s highest level of regulation. 

Brussels insiders, however, have pointed to some notable omissions, like eBay, Airbnb, Netflix and even PornHub. The list isn’t definitive, and other platforms may be added later.

Any business providing digital services to Europeans will eventually have to comply with the DSA. Smaller businesses will face fewer obligations than the biggest platforms, however, and have another six months before they must fall in line.

What’s changing?

Platforms have rolled out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly. 

The DSA “will have a significant impact on the experiences Europeans have when they open their phones or fire up their laptops,” Nick Clegg, Meta’s president for global affairs, said in a blog post. 

Facebook’s and Instagram’s existing tools to report content will be easier to access. Amazon opened a new channel for reporting suspect goods. 

TikTok gave users an extra option for flagging videos, such as for hate speech and harassment, or frauds and scams; those reports will be reviewed by an additional team of experts, said the app, which is owned by Chinese parent company ByteDance.

Google is offering more “visibility” into content moderation decisions and different ways for users to contact the company. It didn’t offer specifics. Under the DSA, Google and other platforms have to provide more information about why posts are taken down.

Facebook, Instagram, TikTok and Snapchat also are giving people the option to turn off automated systems that recommend videos and posts based on their profiles. Such systems have been blamed for leading social media users to increasingly extreme posts. 

The DSA also prohibits targeting vulnerable categories of people, including children, with ads. Platforms like Snapchat and TikTok will stop allowing teen users to be targeted by ads based on their online activities. 

Google will provide more information about targeted ads shown to people in the EU and give researchers more access to data on how its products work. 

Is there pushback?

Zalando, a German online fashion retailer, has filed a legal challenge over its inclusion on the DSA’s list of the largest online platforms, arguing it’s being treated unfairly. 

Nevertheless, Zalando is launching content-flagging systems for its website, even though there’s little risk of illegal material showing up among its highly curated collection of clothes, bags and shoes. 

Amazon has filed a similar case with a top EU court.

What if companies don’t follow the rules?

Officials have warned tech companies that violations could bring fines worth up to 6% of their global revenue — which could amount to billions — or even a ban from the EU. 

“The real test begins now,” said European Commissioner Thierry Breton, who oversees digital policy. He vowed to “thoroughly enforce the DSA and fully use our new powers to investigate and sanction platforms where warranted.” 

But don’t expect penalties to come right away for individual breaches, such as failing to take down a specific video promoting hate speech. 

Instead, the DSA is more about whether tech companies have the right processes in place to reduce the harm that their algorithm-based recommendation systems can inflict on users. Essentially, they’ll have to let the European Commission, the EU’s executive arm and top digital enforcer, look under the hood to see how their algorithms work. 

EU officials “are concerned with user behavior on the one hand, like bullying and spreading illegal content, but they’re also concerned about the way that platforms work and how they contribute to the negative effects,” said Sally Broughton Micova, an associate professor at the University of East Anglia. 

That includes looking at how the platforms work with digital advertising systems, which could be used to profile users for harmful material like disinformation, or how their livestreaming systems function, which could be used to instantly spread terrorist content, said Broughton Micova, who’s also academic co-director at the Centre on Regulation in Europe, a Brussels think tank. 

Big platforms have to identify and assess potential systemic risks and whether they’re doing enough to reduce them. These assessments are due by the end of August, after which they will be independently audited.

The audits are expected to be the main tool to verify compliance — though the EU’s plan has faced criticism for lacking details that leave it unclear how the process will work. 

What about the rest of the world? 

Europe’s changes could have global impact. Wikipedia is tweaking some policies and modifying its terms of use to provide more information on “problematic users and content.” Those alterations won’t be limited to Europe and “will be implemented globally,” said the nonprofit Wikimedia Foundation, which hosts the community-powered encyclopedia. 

Snapchat said its new reporting and appeal process for flagging illegal content or accounts that break its rules will be rolled out first in the EU and then globally in the coming months. 

It’s going to be hard for tech companies to limit DSA-related changes, said Broughton Micova, adding that digital ad networks aren’t isolated to Europe and that social media influencers can have global reach.

US Sues SpaceX for Discriminating Against Refugees, Asylum-Seekers

The U.S. Justice Department is suing Elon Musk’s SpaceX for refusing to hire refugees and asylum-seekers at the rocket company.

In a lawsuit filed on Thursday, the Justice Department said SpaceX routinely discriminated against these job applicants between 2018 and 2022, in violation of U.S. immigration laws.

The lawsuit says that Musk and other SpaceX officials falsely claimed the company was allowed to hire only U.S. citizens and permanent residents due to export control laws that regulate the transfer of sensitive technology.

“U.S. law requires at least a green card to be hired at SpaceX, as rockets are advanced weapons technology,” Musk wrote in a June 16, 2020, tweet cited in the lawsuit.

In fact, U.S. export control laws impose no such restrictions, according to the Justice Department.

Those laws limit the transfer of sensitive technology to foreign entities, but they do not prevent high-tech companies such as SpaceX from hiring job applicants who have been granted refugee or asylum status in the U.S. (Foreign nationals, however, need a special permit.)

“Under these laws, companies like SpaceX can hire asylees and refugees for the same positions they would hire U.S. citizens and lawful permanent residents,” the Department said in a statement. “And once hired, asylees and refugees can access export-controlled information and materials without additional government approval, just like U.S. citizens and lawful permanent residents.”

The company did not respond to a VOA request for comment on the lawsuit and whether it had changed its hiring policy.

Recruiters discouraged refugees, say investigators

The Justice Department’s civil rights division launched an investigation into SpaceX in 2020 after learning about the company’s alleged discriminatory hiring practices.

The inquiry discovered that SpaceX “failed to fairly consider or hire asylees and refugees because of their citizenship status and imposed what amounted to a ban on their hire regardless of their qualification, in violation of federal law,” Assistant Attorney General Kristen Clarke said in a statement.

“Our investigation also found that SpaceX recruiters and high-level officials took actions that actively discouraged asylees and refugees from seeking work opportunities at the company,” Clarke said.

According to data SpaceX provided to the Justice Department, out of more than 10,000 hires between September 2018 and May 2022, SpaceX hired only one person described as an asylee on his application.

The company hired the applicant about four months after the Justice Department notified it about its investigation, according to the lawsuit.

No refugees were hired during this period.

“Put differently, SpaceX’s own hiring records show that SpaceX repeatedly rejected applicants who identified as asylees or refugees because it believed that they were ineligible to be hired due to” export regulations, the lawsuit says.

On one occasion, a recruiter turned down an asylee “who had more than nine years of relevant engineering experience and had graduated from Georgia Tech University,” the lawsuit says.

Suit seeks penalties, change

SpaceX, based in Hawthorne, California, designs, manufactures and launches advanced rockets and spacecraft.

The Justice Department’s lawsuit asks an administrative judge to order SpaceX to “cease and desist” its alleged hiring practices and seeks civil penalties and policy changes.