Elon Musk Hopes to Have Twitter CEO Toward the End of Year 

Billionaire Elon Musk said Wednesday that he anticipates finding a CEO for Twitter “probably toward the end of this year.”

Speaking via a video call to the World Government Summit in Dubai, Musk said making sure the platform can function remained the most important thing for him.

“I think I need to stabilize the organization and just make sure it’s in a financial healthy place,” Musk said when asked about when he’d name a CEO. “I’m guessing probably toward the end of this year would be good timing to find someone else to run the company.”

Musk, 51, made his wealth initially on the finance website PayPal, then created the spacecraft company SpaceX and invested in the electric car company Tesla. In recent months, however, more attention has been focused on the chaos surrounding his $44 billion purchase of the microblogging site Twitter.

Meanwhile, the Ukrainian military’s use of Musk’s satellite internet service Starlink as it defends itself against Russia’s ongoing invasion has intermittently put Musk at the center of the war.

Musk offered a wide-ranging 35-minute discussion that touched on the billionaire’s fears about artificial intelligence, the collapse of civilization and the possibility of space aliens. But questions about Twitter kept coming back up as Musk described both Tesla and SpaceX as able to function without his direct, day-to-day involvement.

“Twitter is still somewhat a startup in reverse,” he said. “There’s work required here to get Twitter to sort of a stable position and to really build the engine of software engineering.” 

Musk also sought to portray his takeover of San Francisco-based Twitter as a cultural correction. 

“I think that the general idea is just to reflect the values of the people as opposed to imposing the values of essentially San Francisco and Berkeley, which are so somewhat of a niche ideology as compared to the rest of the world,” he said. “And, you know, Twitter was, I think, doing a little too much to impose a niche.”

Musk’s takeover at Twitter has seen mass firings and other cost-cutting measures. Musk, who is on the hook for about $1 billion in yearly interest payments for his purchase, has been trying to find ways to maximize profits at the company.

However, some of Musk’s decisions have conflicted with the reasons that journalists, governments and others rely on Twitter as an information-sharing platform.

Musk on Wednesday described the need for users to rely on Twitter for trusted information from verified accounts. However, a botched rollout of a paid verification system saw some users impersonate famous companies, prompting advertisers to withdraw even more of the cash the site needs.

“Twitter is certainly quite the rollercoaster,” he acknowledged.

Forbes estimates Musk’s wealth at just under $200 billion. The Forbes analysis ranks Musk as the second-wealthiest person on Earth, just behind French luxury brand magnate Bernard Arnault. 

But Musk has also become a thought leader for some, albeit an oracle who is trying to get six hours of sleep a night despite the challenges at Twitter.

Musk described his children as being “programmed by Reddit and YouTube.” However, he criticized the Chinese-made social media app TikTok.

“TikTok has a lot of very high usage (but) I often hear people say, ‘Well, I spent two hours on TikTok, but I regret those two hours,’” Musk said. “We don’t want that to be the case with Twitter.”

TikTok, owned by Beijing-based ByteDance, did not immediately respond to a request for comment. 

Musk warned that artificial intelligence should be regulated “very carefully,” describing it as akin to the promise of nuclear power but the danger of atomic bombs. He also cautioned against having a single civilization or “too much cooperation” on Earth, saying it could “collapse” a society that’s like a “tiny candle in a vast darkness.”

And when asked about the existence of aliens, Musk had a firm response.

“The crazy thing is, I’ve seen no evidence of alien technology or alien life whatsoever. And I think I’d know because of SpaceX,” he said. “I don’t think anybody knows more about space, you know, than me.” 

11 States Consider ‘Right to Repair’ for Farming Equipment

On Colorado’s northeastern plains, where the pencil-straight horizon divides golden fields and blue sky, a farmer named Danny Wood scrambles to plant and harvest proso millet, dryland corn and winter wheat in short, seasonal windows. That is until his high-tech Steiger 370 tractor conks out. 

The tractor’s manufacturer doesn’t allow Wood to make certain fixes himself, and last spring his fertilizing operations were stalled for three days before the servicer arrived to add a few lines of missing computer code for $950. 

“That’s where they have us over the barrel, it’s more like we are renting it than buying it,” said Wood, who spent $300,000 on the used tractor. 

Wood’s plight, echoed by farmers across the country, has pushed lawmakers in Colorado and 10 other states to introduce bills that would force manufacturers to provide the tools, software, parts and manuals needed for farmers to do their own repairs — thereby avoiding steep labor costs and delays that imperil profits. 

“The manufacturers and the dealers have a monopoly on that repair market because it’s lucrative,” said Rep. Brianna Titone, a Democrat and one of the bill’s sponsors. “[Farmers] just want to get their machine going again.” 

In Colorado, the legislation is largely being pushed by Democrats, while their Republican colleagues find themselves stuck in a tough spot: torn between right-leaning farming constituents asking to be able to repair their own machines and the manufacturing businesses that oppose the idea. 

The manufacturers argue that changing the current practice with this type of legislation would force companies to expose trade secrets. They also say it would make it easier for farmers to tinker with the software and illegally crank up the horsepower and bypass the emissions controller — risking operators’ safety and the environment. 

Similar arguments around intellectual property have been leveled against the broader campaign called ‘right to repair,’ which has picked up steam across the country — crusading for the right to fix everything from iPhones to hospital ventilators during the pandemic. 

In 2011, Congress tried passing a right to repair law for car owners and independent servicers. That bill did not pass, but a few years later, automotive industry groups agreed to a memorandum of understanding to give owners and independent mechanics — not just authorized dealerships — access to tools and information to fix problems. 

In 2021, the Federal Trade Commission pledged to beef up its right to repair enforcement at the direction of President Joe Biden. And just last year, Titone sponsored and passed Colorado’s first right to repair law, empowering people who use wheelchairs with the tools and information to fix them. 

For the right to repair farm equipment — from thin tractors used between grape vines to behemoth combines for harvesting grain that can cost over half a million dollars — Colorado is joined by 10 states including Florida, Maryland, Missouri, New Jersey, Texas and Vermont. 

Many of the bills are finding bipartisan support, said Nathan Proctor, who leads Public Interest Research Group’s national right to repair campaign. But in Colorado’s House committee on agriculture, Democrats pushed the bill forward in a 9-4 vote along party lines, with Republicans in opposition even though the bill’s second sponsor is Republican Representative Ron Weinberg. 

“That’s really surprising, and that upset me,” said the Republican farmer Wood. 

Wood’s tractor, which flies an American flag reading “Farmers First,” isn’t his only machine to break down. His grain harvesting combine was dropping into idle, but the servicer took five days to arrive on Wood’s farm — a setback that could mean a hail storm decimates a wheat field or the soil temperature moves beyond the Goldilocks zone for planting. 

“Our crop is ready to harvest and we can’t wait five days, but there was nothing else to do,” said Wood. “When it’s broke down you just sit there and wait and that’s not acceptable. You can be losing $85,000 a day.” 

Representative Richard Holtorf, the Republican who represents Wood’s district and is a farmer himself, said he’s being pulled between his constituents and the dealerships in his district covering the largely rural northeast corner of the state. He voted against the measure because he believes it will financially hurt local dealerships in rural areas and could jeopardize trade secrets. 

“I do sympathize with my farmers,” Holtorf said, but he added, “I don’t think it’s the role of government to be forcing the sale of their intellectual property.”  

At the packed hearing last week that spilled into a second room in Colorado’s Capitol, the core concerns raised in testimony were farmers illegally slipping around the emissions control and cranking up the horsepower. 

“I know growers, if they can change horsepower and they can change emissions they are going to do it,” said Russ Ball, sales manager at 21st Century Equipment, a John Deere dealership in Western states. 

The bill’s proponents acknowledged that the legislation could make it easier for operators to modify horsepower and emissions controls but argued that farmers are already able to tinker with their machines and doing so would remain illegal. 

This January, the Farm Bureau and the farm equipment manufacturer John Deere did sign a memorandum of understanding — a right to repair agreement made in the free market and without government intervention. The agreement stipulates that John Deere will share some parts, diagnostic and repair codes and manuals to allow farmers to make their own fixes. 

The Colorado bill’s detractors laud that agreement as a strong middle ground, while Titone said it wasn’t enough, pointing to the six of Colorado’s biggest farmworker associations that support the bill. 

Proctor, who is tracking 20 right to repair proposals in a number of industries across the country, said the memorandum of understanding has fallen far short. 

“Farmers are saying no,” Proctor said. “We want the real thing.” 

China-Owned Parent Company of TikTok Among Top Spenders on Internet Lobbying

ByteDance, the Chinese parent company of social media platform TikTok, has dramatically ramped up its U.S. lobbying effort since 2020 as U.S.-China relations continue to sour, and as of last year it was the fourth-largest Internet company in federal lobbying spending, according to newly released data.

Publicly available information collected by OpenSecrets, a Washington nonprofit that tracks campaign finance and lobbying data, shows that ByteDance and its subsidiaries, including TikTok, the wildly popular short video app, have spent more than $13 million on U.S. lobbying since 2020. In 2022 alone, Fox News reported, the companies spent $5.4 million on lobbying.

Only Amazon.com ($19.7 million) and the parent companies of Google ($11 million) and Facebook ($19 million) spent more, according to OpenSecrets.

In the fourth quarter of 2022, ByteDance spent $1.2 million on lobbying, according to Fox News.

The lobbyists hired by ByteDance include former U.S. senators Trent Lott and John Breaux; David Urban, a former senior adviser to Donald Trump’s 2016 presidential campaign who was also a former chief of staff for the late Senator Arlen Specter; Layth Elhassani, special assistant to President Barack Obama in the White House Office of Legislative Affairs; and Samantha Clark, former deputy staff director of the U.S. Senate Armed Services Committee.

In November, TikTok hired Jamal Brown, a deputy press secretary at the Pentagon who was national press secretary for Joe Biden’s presidential campaign, to manage policy communications for the Americas, with a focus on the U.S., according to Politico.

“This is kind of the template for how modern tech lobbying goes,” Dan Auble, a senior researcher at OpenSecrets, told Vox. “These companies come on the scene and suddenly start spending substantial amounts of money. And ByteDance has certainly done that.”

U.S. officials have criticized TikTok as a security risk due to ties between ByteDance and the Chinese government. The worry is that user data collected by TikTok could be passed to Beijing, so lawmakers have been trying to regulate or even ban the app in the U.S.

In 2019, TikTok paid a $5.7 million fine as part of a settlement with the Federal Trade Commission over violating children’s privacy rights. The Trump administration attempted unsuccessfully to ban downloads of TikTok from app stores and outlaw transactions between Americans and ByteDance.

As of late December, TikTok has been banned on federally managed devices, and 19 states had at least partially blocked the app from state-managed devices.

The number of federal bills that ByteDance has been lobbying on increased to 14 in 2022 from eight in 2020.

With TikTok CEO Shou Zi Chew scheduled to testify before the U.S. House of Representatives Energy and Commerce Committee on March 23, and a House of Representatives Foreign Affairs Committee vote in March on a bill that would ban the use of TikTok in the U.S., the company is expected to further expand its U.S. influence campaign.

Erich Andersen, general counsel and head of corporate affairs at ByteDance and TikTok, told the New York Times in January that “it was necessary for us to accelerate our own explanation of what we were prepared to do and the level of commitments on the national security process.”

TikTok has been met with a mixed response to its efforts to prove that its operations in the U.S. are outside of Beijing’s sphere of influence.

Michael Beckerman, who oversees public policy for the Americas at TikTok, met with Mike Gallagher, chairman of the U.S. House of Representatives Select Committee on China Affairs, on February 1 to explain the company’s U.S. data security plans.

According to Reuters, Gallagher’s spokesperson, Jordan Dunn, said after the meeting that the lawmaker “found their argument unpersuasive.”

Congressman Ken Buck and Senator Josh Hawley on January 25 introduced a bill, the No TikTok on United States Devices Act, which would direct President Joe Biden to use the International Emergency Economic Powers Act to prohibit downloads of TikTok and ban commercial activity with ByteDance.

Joel Thayer, president of the Digital Progress Institute and a telecom regulation lawyer, told VOA Mandarin that he doubted the Buck-Hawley bill would become law. He said that calls to ban TikTok began during the Trump administration, yet TikTok has remained a visible and influential presence in the U.S.

James Lewis, director of the CSIS Technology and Public Policy Program, told VOA Mandarin, “An outright ban will be difficult because TikTok is speech, which is protected speech. But it [the U.S. government] can ban financial transactions, that’s possible.”

Senators Marco Rubio and Angus King reintroduced bipartisan legislation on February 10 to ban TikTok and other similar apps from operating in the U.S. by “blocking and prohibiting all transactions from any social media company in, or under the influence of, China, Russia, and several other foreign countries of concern unless they fully divest of dangerous foreign ownership.”

The Committee on Foreign Investment in the United States (CFIUS), an interagency group that reviews transactions involving foreign parties for possible national security threats, ordered ByteDance to divest TikTok in 2020. The two parties have yet to reach an agreement after two years of talks.

Chuck Flint, vice president of strategic relationships at Breitbart News who is also the former chief of staff for Senator Marsha Blackburn, told VOA Mandarin, “I expect that CFIUS will be hesitant to ban TikTok. Anything short of an outright ban will leave China’s TikTok data pipeline in place.”

China experts believe that TikTok wants to reach an agreement with CFIUS rather than being banned from the U.S. or being forced to sell TikTok’s U.S. business to an American company.

Lewis of CSIS said, “Every month that we don’t do CFIUS is a step closer towards some kind of ban.”

Julian Ku, professor of law and faculty director of international programs at Hofstra University, told VOA Mandarin, “The problem is that no matter what they offer, there’s no way to completely shield the data from the Chinese government … as long as there continues to be a shared entity.”

Adrianna Zhang contributed to this report.

Google to Expand Misinformation ‘Prebunking’ in Europe

After seeing promising results in Eastern Europe, Google will initiate a new campaign in Germany that aims to make people more resilient to the corrosive effects of online misinformation.

The tech giant plans to release a series of short videos highlighting the techniques common to many misleading claims. The videos will appear as advertisements on platforms like Facebook, YouTube or TikTok in Germany. A similar campaign in India is also in the works.

It’s an approach called prebunking, which involves teaching people how to spot false claims before they encounter them. The strategy is gaining support among researchers and tech companies. 

“There’s a real appetite for solutions,” said Beth Goldberg, head of research and development at Jigsaw, an incubator division of Google that studies emerging social challenges. “Using ads as a vehicle to counter a disinformation technique is pretty novel. And we’re excited about the results.”

While belief in falsehoods and conspiracy theories isn’t new, the speed and reach of the internet have given them a heightened power. When catalyzed by algorithms, misleading claims can discourage people from getting vaccines, spread authoritarian propaganda, foment distrust in democratic institutions and spur violence.

It’s a challenge with few easy solutions. Journalistic fact checks are effective, but they’re labor intensive, aren’t read by everyone, and won’t convince those already distrustful of traditional journalism. Content moderation by tech companies is another response, but it only drives misinformation elsewhere, while prompting cries of censorship and bias.

Prebunking videos, by contrast, are relatively cheap and easy to produce and can be seen by millions when placed on popular platforms. They also avoid the political challenge altogether by focusing not on the topics of false claims, which are often cultural lightning rods, but on the techniques that make viral misinformation so infectious.

Those techniques include fear-mongering, scapegoating, false comparisons, exaggeration and missing context. Whether the subject is COVID-19, mass shootings, immigration, climate change or elections, misleading claims often rely on one or more of these tricks to exploit emotions and short-circuit critical thinking.

Last fall, Google launched the largest test of the theory so far with a prebunking video campaign in Poland, the Czech Republic and Slovakia. The videos dissected different techniques seen in false claims about Ukrainian refugees. Many of those claims relied on alarming and unfounded stories about refugees committing crimes or taking jobs away from residents.

The videos were seen 38 million times on Facebook, TikTok, YouTube and Twitter — a number that equates to a majority of the population in the three nations. Researchers found that compared to people who hadn’t seen the videos, those who did watch were more likely to be able to identify misinformation techniques, and less likely to spread false claims to others.

The pilot project was the largest test of prebunking so far and adds to a growing consensus in support of the theory.

“This is a good news story in what has essentially been a bad news business when it comes to misinformation,” said Alex Mahadevan, director of MediaWise, a media literacy initiative of the Poynter Institute that has incorporated prebunking into its own programs in countries including Brazil, Spain, France and the U.S.

Mahadevan called the strategy a “pretty efficient way to address misinformation at scale, because you can reach a lot of people while at the same time address a wide range of misinformation.”

Google’s new campaign in Germany will include a focus on photos and videos, and the ease with which they can be presented as evidence of something false. One example: Last week, following the earthquake in Turkey, some social media users shared video of the massive explosion in Beirut in 2020, claiming it was actually footage of a nuclear explosion triggered by the earthquake. It was not the first time the 2020 explosion had been the subject of misinformation.

Google will announce its new German campaign Monday ahead of next week’s Munich Security Conference. The timing of the announcement, coming before that annual gathering of international security officials, reflects heightened concerns about the impact of misinformation among both tech companies and government officials.

Tech companies like prebunking because it avoids touchy topics that are easily politicized, said Sander van der Linden, a University of Cambridge professor considered a leading expert on the theory. Van der Linden worked with Google on its campaign and is now advising Meta, the owner of Facebook and Instagram, as well.

Meta has incorporated prebunking into many different media literacy and anti-misinformation campaigns in recent years, the company told The Associated Press in an emailed statement.

They include a 2021 program in the U.S. that offered media literacy training about COVID-19 to Black, Latino and Asian American communities. Participants who took the training were later tested and found to be far more resistant to misleading COVID-19 claims.

Prebunking comes with its own challenges. The effects of the videos eventually wear off, requiring the use of periodic “booster” videos. Also, the videos must be crafted well enough to hold the viewer’s attention, and tailored for different languages, cultures and demographics. And like a vaccine, it’s not 100% effective for everyone.

Google found that its campaign in Eastern Europe varied from country to country. While the effect of the videos was highest in Poland, in Slovakia they had “little to no discernible effect,” researchers found. One possible explanation: The videos were dubbed into the Slovak language, and not created specifically for the local audience.

But together with traditional journalism, content moderation and other methods of combating misinformation, prebunking could help communities reach a kind of herd immunity when it comes to misinformation, limiting its spread and impact.

“You can think of misinformation as a virus. It spreads. It lingers. It can make people act in certain ways,” Van der Linden told the AP. “Some people develop symptoms, some do not. So: if it spreads and acts like a virus, then maybe we can figure out how to inoculate people.”
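
Van der Linden’s virus analogy can be made concrete with a toy contagion model. The sketch below is purely illustrative: the population size, contact counts and sharing probabilities are invented parameters, not measurements of real prebunking effectiveness. It simply shows the mechanism he describes, that “inoculating” a share of users against a manipulation technique shrinks how far a false claim ultimately spreads.

```python
import random

def spread(pop_size, inoculated_frac, p_share=0.3, contacts=8, seed=1):
    """Toy misinformation-contagion model (illustrative parameters only).

    Each user who shares a false claim exposes `contacts` random others.
    An un-inoculated user accepts and re-shares with probability p_share;
    an inoculated ("prebunked") user re-shares at a quarter of that rate.
    Returns the fraction of the population that ended up sharing.
    """
    rng = random.Random(seed)
    inoculated = [rng.random() < inoculated_frac for _ in range(pop_size)]
    shared = [False] * pop_size
    shared[0] = True          # patient zero posts the claim
    frontier = [0]
    while frontier:
        next_frontier = []
        for _ in frontier:                    # each sharer makes contacts
            for _ in range(contacts):
                j = rng.randrange(pop_size)
                p = p_share * (0.25 if inoculated[j] else 1.0)
                if not shared[j] and rng.random() < p:
                    shared[j] = True
                    next_frontier.append(j)
        frontier = next_frontier
    return sum(shared) / pop_size

# Compare reach with no inoculation vs. 60% of users "prebunked".
print("no prebunking :", spread(10_000, 0.0))
print("60% prebunked :", spread(10_000, 0.6))
```

Averaged over a few random seeds, the prebunked population consistently sees a much smaller cascade, which is the “herd immunity” effect the article alludes to.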

Russian Spacecraft Loses Pressure; Space Station Crew Safe

An uncrewed Russian supply ship docked at the International Space Station has lost cabin pressure, the Russian space corporation reported Saturday, saying the incident doesn’t pose any danger to the station’s crew.

Roscosmos said the hatch between the station and the Progress MS-21 had been locked so the loss of pressure didn’t affect the orbiting outpost.

“The temperature and pressure on board the station are within norms and there is no danger to health and safety of the crew,” it said in a statement.

The space corporation didn’t say what may have caused the cargo ship to lose pressure.

Roscosmos noted that the cargo ship had already been loaded with waste before its scheduled disposal. The craft is set to be undocked from the station on Feb. 18 and deorbited to burn up in the atmosphere.

The announcement came shortly after a new Russian cargo ship docked smoothly at the station Saturday. The Progress MS-22 delivered almost 3 tons of food, water and fuel along with scientific equipment for the crew.

Roscosmos said that the loss of pressure in the Progress MS-21 didn’t affect the docking of the new cargo ship and “will have no impact on the future station program.”

The depressurization of the cargo craft follows an incident in December with the Soyuz crew capsule, which was hit by a tiny meteoroid that left a small hole in the exterior radiator and sent coolant spewing into space.

Russian cosmonauts Sergey Prokopyev and Dmitri Petelin, and NASA astronaut Frank Rubio were supposed to use the capsule to return to Earth in March, but Russian space officials decided that higher temperatures resulting from the coolant leak could make it dangerous to use.

They decided to launch a new Soyuz capsule February 20 so the crew would have a lifeboat in the event of an emergency. But since it will travel in automatic mode to expedite the launch, a replacement crew will now have to wait until late summer or fall when another capsule is ready. It means that Prokopyev, Petelin and Rubio will have to stay several extra months at the station, possibly pushing their mission to close to a year.

NASA took part in all the discussions and agreed with the plan.

Besides Prokopyev, Petelin and Rubio, the space station is home to NASA astronauts Nicole Mann and Josh Cassada, Russian Anna Kikina, and Japan’s Koichi Wakata. The four rode up on a SpaceX capsule last October.

Several US Universities to Experiment With Micro Nuclear Power 

If your image of nuclear power is giant, cylindrical concrete cooling towers pouring out steam on a site that takes up hundreds of acres of land, soon there will be an alternative: tiny nuclear reactors that produce only one-hundredth the electricity and can even be delivered on a truck.

Small but meaningful amounts of electricity — nearly enough to run a small campus, a hospital or a military complex, for example — will pulse from a new generation of micronuclear reactors. Now, some universities are taking interest.

“What we see is these advanced reactor technologies having a real future in decarbonizing the energy landscape in the U.S. and around the world,” said Caleb Brooks, a nuclear engineering professor at the University of Illinois at Urbana-Champaign.

The tiny reactors carry some of the same challenges as large-scale nuclear, such as how to dispose of radioactive waste and how to make sure they are secure. Supporters say those issues can be managed and the benefits outweigh any risks.

Universities are interested in the technology not just to power their buildings but to see how far it can go in replacing the coal and gas-fired energy that causes climate change. The University of Illinois hopes to advance the technology as part of a clean energy future, Brooks said. The school plans to apply for a construction permit for a high-temperature, gas-cooled reactor developed by the Ultra Safe Nuclear Corporation, and aims to start operating it by early 2028. Brooks is the project lead.

Microreactors will be “transformative” because they can be built in factories and hooked up on site in a plug-and-play way, said Jacopo Buongiorno, professor of nuclear science and engineering at the Massachusetts Institute of Technology. Buongiorno studies the role of nuclear energy in a clean energy world.

“That’s what we want to see, nuclear energy on demand as a product, not as a big mega project,” he said.

Both Buongiorno and Marc Nichol, senior director for new reactors at the Nuclear Energy Institute, view the interest by schools as the start of a trend.

Last year, Penn State University signed a memorandum of understanding with Westinghouse to collaborate on microreactor technology. Mike Shaqqo, the company’s senior vice president for advanced reactor programs, said universities are going to be “one of our key early adopters for this technology.”

Penn State wants to prove the technology so that Appalachian industries, such as steel and cement manufacturers, may be able to use it, said Professor Jean Paul Allain, head of the nuclear engineering department. Those two industries tend to burn dirty fuels and have very high emissions. Using a microreactor also could be one of several options to help the university use less natural gas and achieve its long-term carbon emissions goals, he said.

“I do feel that microreactors can be a game-changer and revolutionize the way we think about energy,” Allain said.

For Allain, microreactors can complement renewable energy by providing a large amount of power without taking up much land. A 10-megawatt microreactor could go on less than an acre, whereas windmills or a solar farm would need far more space to produce 10 megawatts, he added. The goal is to have one at Penn State by the end of the decade.
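
Allain’s land-use point can be checked with back-of-envelope arithmetic. In the sketch below, the “less than an acre” microreactor footprint comes from the article, but the solar figures (panel capacity per square meter and capacity factor) are rough illustrative assumptions, not numbers anyone quoted.

```python
# Back-of-envelope land-use comparison for 10 MW of average output.
ACRES_PER_M2 = 1 / 4046.86  # square meters per acre conversion

def solar_acres_for_avg_mw(avg_mw, peak_w_per_m2=150.0, capacity_factor=0.20):
    """Acres of land for a solar farm averaging `avg_mw` of output.

    Assumes ~150 W of panel capacity per m^2 of total farm area and a
    20% capacity factor -- both rough, illustrative assumptions.
    """
    avg_w_per_m2 = peak_w_per_m2 * capacity_factor   # ~30 W/m^2 average
    area_m2 = avg_mw * 1e6 / avg_w_per_m2
    return area_m2 * ACRES_PER_M2

microreactor_acres = 1.0   # the article cites "less than an acre"
solar_acres = solar_acres_for_avg_mw(10)
print(f"10 MW average: solar ~{solar_acres:,.0f} acres "
      f"vs. microreactor <{microreactor_acres:g} acre")
```

Under these assumptions the solar farm needs on the order of tens of acres for the same average 10 MW, roughly two orders of magnitude more land than the microreactor, which is the trade-off Allain describes.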

Purdue University in Indiana is working with Duke Energy on the feasibility of using advanced nuclear energy to meet its long-term energy needs.

Nuclear reactors that are used for research are nothing new on campus. About two dozen U.S. universities have them. But using them as an energy source is new.

Back at the University of Illinois, Brooks explains the microreactor would generate heat to make steam. While the excess heat from burning coal and gas to make electricity is often wasted, Brooks sees the steam production from the nuclear microreactor as a plus, because it’s a carbon-free way to deliver steam through the campus district heating system to radiators in buildings, a common heating method for large facilities in the Midwest and Northeast. The campus has hundreds of buildings.

The 10-megawatt microreactor wouldn’t meet all of the demand, but it would serve to demonstrate the technology, as other communities and campuses look to transition away from fossil fuels, Brooks said.

One company that is building microreactors that the public can get a look at today is Last Energy, based in Washington, D.C. It built a model reactor in Brookshire, Texas, that’s housed in an edgy cube covered in reflective metal.

Now it’s taking that apart to test how to transport the unit. A caravan of trucks is taking it to Austin, where company founder Bret Kugelmass is scheduled to speak at the South by Southwest conference and festival.

Kugelmass, a technology entrepreneur and mechanical engineer, is talking with some universities, but his primary focus is on industrial customers. He’s working with licensing authorities in the United Kingdom, Poland and Romania to try to get his first reactor running in Europe in 2025.

The urgency of the climate crisis means zero-carbon nuclear energy must be scaled up soon, he said.

“It has to be a small, manufactured product as opposed to a large, bespoke construction project,” he said.

Traditional nuclear power costs billions of dollars. An example is two additional reactors at a plant in Georgia that will end up costing more than $30 billion.

The total cost of Last Energy’s microreactor, including module fabrication, assembly and site prep work, is under $100 million, the company says.

Westinghouse, which has been a mainstay of the nuclear industry for over 70 years, is developing its “eVinci” microreactor, Shaqqo said, and is aiming to get the technology licensed by 2027.

The Department of Defense is working on a microreactor too. Project Pele is a DOD prototype mobile nuclear reactor under design at the Idaho National Laboratory.

Abilene Christian University in Texas is leading a group of three other universities with the company Natura Resources to design and build a research microreactor cooled by molten salt to allow for high-temperature operations at low pressure, in part to help train the next-generation nuclear workforce.

But not everyone shares the enthusiasm. Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists, called it “completely unjustified.”

Microreactors in general will require much more uranium to be mined and enriched per unit of electricity generated than conventional reactors do, he said, adding that he also expects fuel costs to be substantially higher and that more depleted uranium waste could be generated than with conventional reactors.

“I think those who are hoping that microreactors are going to be the silver bullet for solving the climate change crisis are simply betting on the wrong horse,” he said.

Lyman also said he fears microreactors could be targeted for a terrorist attack, and some designs would use fuels that could be attractive to terrorists seeking to build crude nuclear weapons. The UCS does not oppose using nuclear power, but wants to make sure it’s safe.

The United States does not have a national facility for storing spent nuclear fuel, which is piling up. Microreactors would only compound the problem and spread the radioactive waste around, Lyman said.

A 2022 Stanford-led study found that smaller modular reactors — the next size up from micro — will generate more waste than conventional reactors. Lead author Lindsay Krall said this week that the design of microreactors would make them subject to the same issue.

Kugelmass sees only promise. Nuclear, he said, has been "totally misunderstood and underleveraged." It will be "the key pillar of our energy transformation moving forward."

Australian Defense Department to Remove Chinese-Made Cameras

Australia’s Defense Department will remove surveillance cameras made by Chinese Communist Party-linked companies from its buildings, the government said Thursday after the U.S. and Britain made similar moves.

The Australian newspaper reported Thursday that at least 913 cameras, intercoms, electronic entry systems and video recorders developed and manufactured by Chinese companies Hikvision and Dahua are in Australian government and agency offices, including the Defense Department and the Department of Foreign Affairs and Trade.

Hikvision and Dahua are partly owned by China’s Communist Party-ruled government.

Australian Defense Minister Richard Marles said his department is assessing all its surveillance technology.

“Where those particular cameras are found, they’re going to be removed,” Marles told Australian Broadcasting Corp. “There is an issue here and we’re going to deal with it.”

Asked about Australia’s decision, Chinese Foreign Ministry spokesperson Mao Ning criticized what she called “wrongful practices that overstretch the concept of national security and abuse state power to suppress and discriminate against Chinese enterprises.”

Without mentioning Australia by name, Mao said the Chinese government has “always encouraged Chinese enterprises to carry out foreign investment and cooperation in accordance with market principles and international rules, and on the basis of compliance with local laws.”

“We hope Australia will provide a fair and non-discriminatory environment for the normal operation of Chinese enterprises and do more things that are conducive to mutual trust and cooperation between the two sides,” she told reporters at a daily briefing.

The U.S. government said in November it was banning telecommunications and video surveillance equipment from several prominent Chinese brands including Hikvision and Dahua in an effort to protect the nation’s communications network.

Security cameras made by Hikvision were also banned from British government buildings in November.

An audit in Australia found Hikvision and Dahua cameras and security equipment in almost every department except the Agriculture Department and the Department of Prime Minister and Cabinet.

The Australian War Memorial and National Disability Insurance Agency have said they will remove the Chinese cameras found at their sites, the ABC reported.

Opposition cybersecurity spokesperson James Paterson said he had prompted the audit by questioning each federal agency over six months, after the Home Affairs Department was unable to say how many of the cameras, access control systems and intercoms were installed in government buildings.

“We urgently need a plan from the … government to rip every one of these devices out of Australian government departments and agencies,” Paterson said.

Both companies are subject to China’s National Intelligence Law which requires them to cooperate with Chinese intelligence agencies, he said.

“We would have no way of knowing if the sensitive information, images and audio collected by these devices are secretly being sent back to China against the interests of Australian citizens,” Paterson said.

US Students’ ‘Big Idea’ Could Help NASA Explore the Moon

Last November, Northeastern University student Andre Neto Caetano watched the live, late-night launch of NASA’s Artemis 1 from Kennedy Space Center in Florida on a cellphone placed on top of a piano in the lobby of the hotel where he was staying in California.

“I had, not a flashback, but a flash-forward of seeing maybe Artemis 4 or something, and COBRA, as part of the payload, and it is on the moon doing what it was meant to do,” Caetano told VOA during a recent Skype interview.

Artemis 1 launched the night before Caetano and his team of scholars presented their Crater Observing Bio-inspired Rolling Articulator (COBRA) rover project at NASA’s Breakthrough, Innovative, and Game Changing (BIG) Idea Challenge. The team hoped to impress judges assembled in the remote California desert.

“They were skeptical that the mobility solutions that we were proposing would actually work,” he said.

That skepticism, said Caetano, came from the simplicity of their design.

“It’s a robot that moves like a snake, and then the head and the tail connect, and then it rolls,” he said.

NASA’s BIG Idea Challenge prompted teams of college students to compete to develop solutions for the agency’s ambitious goals in the upcoming Artemis missions to the moon, which Caetano sums up as “extreme lunar terrain mobility.”

Northeastern’s COBRA is designed to move through the fine dust, or regolith, of the lunar surface to probe the landscape for interesting features, including ice and water, hidden in the shadows of deep craters.

“They never could … deploy a robot or a ground vehicle that can sort of negotiate the environment and get to the bottom of these craters and look for ice water content,” said professor Alireza Ramezani, who advises the COBRA team and has worked with robotic designs that mimic the movements of real organisms, something Caetano said formed a baseline for their research.

“With him building a robot dog and robot bat, we knew we wanted to have some ‘bioinspiration’ in our project,” Caetano said.

Using biology as the driving force behind COBRA’s design was also something Ramezani hoped would win over judges in NASA’s competition.

“Our robot sort of tumbled 80 to 90 feet (24-27 meters) down this hill and that … impressed the judges,” he told VOA. “We did this with minimum energy consumption and within, like, 10 or 15 seconds.”

Caetano said COBRA weighs about 7 kilograms, “so the fact that COBRA is super light brings a benefit to it, as well.”

Ramezani added that COBRA is also cost-effective.

“If you want to have a space-worthy platform, it’s going to be in the order of $100,000 to $200,000. You can have many of these systems tumbling down these craters,” he said.

The Northeastern team’s successful COBRA test put to rest any lingering skepticism, sending them to the top of NASA’s 2022 BIG Idea competition and hopefully — in the not-too-distant future — to the top of NASA’s Space Launch System on its way to the moon.

“I’m not saying this, our judges said this. It’s potentially going to transform the way future space exploration systems look like,” said Ramezani. “They are even talking to some of our partners to see if we can increase technology readiness of the system, make it space worthy, and deploy it to the moon.”

Which is why, despite his impending graduation later this year, Caetano plans to continue developing COBRA alongside his teammates.

“Because we brought it to life together, the idea of just fully abandoning it at graduation probably doesn’t appeal to most of us,” Caetano said. “In some way or another, we still want to be involved in the project, in making sure that … we are still the ones who put it on the moon at some point.”

That could happen as soon as 2025, the year NASA hopes to return astronauts to the lunar surface in the Artemis program.

Australia to Review Chinese-Made Cameras in Defense Offices

The Australian government will examine surveillance technology used in offices of the defense department, Defense Minister Richard Marles said Thursday, amid reports the Chinese-made cameras installed there raised security risks.

The move comes after Britain in November asked its departments to stop installing Chinese-linked surveillance cameras at sensitive buildings. Some U.S. states have banned vendors and products from several Chinese technology companies.

“This is an issue and … we’re doing an assessment of all the technology for surveillance within the defense (department) and where those particular cameras are found, they are going to be removed,” Marles told ABC Radio in an interview.

Opposition lawmaker James Paterson said Thursday his own audit revealed almost 1,000 units of equipment by Hangzhou Hikvision Digital Technology and Dahua Technology, two partly state-owned Chinese firms, were installed across more than 250 Australian government offices.

Paterson, the shadow minister for cybersecurity and countering foreign interference, urged the government to urgently come up with a plan to remove all such cameras.

Marles said the issue was significant but “I don’t think we should overstate it.”

Australian media reported on Wednesday that the national war memorial in Canberra would remove several Chinese-made security cameras installed on the premises over concerns of spying.

Hikvision and Dahua Technology did not immediately respond to requests seeking comment.

Australia and China have been looking to mend diplomatic ties, which soured after Canberra banned Huawei from its 5G broadband network in 2018. Ties cooled further after Australia called for an independent investigation into the origins of COVID-19.

China responded with tariffs on several Australian commodities.

Prime Minister Anthony Albanese said he was not concerned about how Beijing might react to the removal of cameras.

“We act in accordance with Australia’s national interest. We do so transparently and that’s what we will continue to do,” Albanese told reporters.

Ex-Twitter Execs Deny Pressure to Block Hunter Biden Story

Former Twitter executives conceded Wednesday they made a mistake by blocking a story about Hunter Biden, the son of U.S. President Joe Biden, from the social media platform in the run-up to the 2020 election, but adamantly denied Republican assertions they were pressured by Democrats and law enforcement to suppress the story.

“The decisions here aren’t straightforward, and hindsight is 20/20,” Yoel Roth, Twitter’s former head of trust and safety, testified to Congress. “It isn’t obvious what the right response is to a suspected, but not confirmed, cyberattack by another government on a presidential election.”

He added, “Twitter erred in this case because we wanted to avoid repeating the mistakes of 2016.”

The three former executives appeared before the House Oversight and Accountability Committee to testify for the first time about the company’s decision to initially block from Twitter a New York Post article in October 2020 about the contents of a laptop belonging to Hunter Biden.

Emboldened by Twitter’s new leadership in billionaire Elon Musk — whom they see as more sympathetic to conservatives than the company’s previous leadership — Republicans used the hearing to push a long-standing and unproven theory that social media companies including Twitter are biased against them.

Committee Chairman Representative James Comer said the hearing is the panel’s “first step in examining the coordination between the federal government and Big Tech to restrict protected speech and interfere in the democratic process.”

Alleged political bias

The hearing continues a yearslong trend of Republican leaders calling tech company leaders to testify about alleged political bias. Democrats, meanwhile, have pressed the companies on the spread of hate speech and misinformation on their platforms.

The witnesses Republicans subpoenaed were Roth, Vijaya Gadde, Twitter’s former chief legal officer, and James Baker, the company’s former deputy general counsel.

Democrats brought a witness of their own, Anika Collier Navaroli, a former employee with Twitter’s content moderation team. She testified last year to the House committee that investigated the January 6 Capitol riot about Twitter’s preferential treatment of Donald Trump until it banned the then-president from the site two years ago.

‘A bizarre political stunt’

The White House criticized congressional Republicans for staging “a bizarre political stunt,” hours after Biden’s State of the Union address where he detailed bipartisan progress in his first two years in office.

“This appears to be the latest effort by the House Republican majority’s most extreme MAGA members to question and relitigate the outcome of the 2020 election,” White House spokesperson Ian Sams said in a statement Wednesday. “This is not what the American people want their leaders to work on.”

The New York Post reported weeks before the 2020 presidential election that it had received from Trump’s personal lawyer, Rudy Giuliani, a copy of a hard drive from a laptop that Hunter Biden had dropped off 18 months earlier at a Delaware computer repair shop and never retrieved. Twitter blocked people from sharing links to the story for several days.

“You exercised an amazing amount of clout and power over the entire American electorate by even holding (this story) hostage for 24 hours and then reversing your policy,” Representative Andy Biggs said to the panel of witnesses.

Months later, Twitter’s then-CEO, Jack Dorsey, called the company’s communications around the Post article “not great.” He added that blocking the article’s URL with “zero context” around why it was blocked was “unacceptable.”

The newspaper story was greeted at the time with skepticism because of questions about the laptop’s origins, including Giuliani’s involvement, and because top officials in the Trump administration had already warned that Russia was working to denigrate Joe Biden before the White House election.

The Kremlin interfered in the 2016 race by hacking Democratic emails that were subsequently leaked, and fears that Russia would meddle again in the 2020 race were widespread across Washington.

Musk releases ‘Twitter files’

Just last week, lawyers for the younger Biden asked the U.S. Justice Department to investigate people who say they accessed his personal data. But they did not acknowledge that the data came from a laptop Hunter Biden is purported to have dropped off at a computer repair shop.

The issue was also reignited recently after Musk took over Twitter as CEO and began to release a slew of company information to independent journalists, what he has called the “Twitter Files.”

The documents and data largely show internal debates among employees over the decision to temporarily censor links to the Hunter Biden story. The tweet threads lacked substantial evidence of a targeted influence campaign from Democrats or the FBI, which has denied any involvement in Twitter’s decision-making.

Witness often targeted

One of Wednesday’s witnesses, Baker, has been a frequent target of Republican scrutiny.

Baker was the FBI’s general counsel during the opening of two of the bureau’s most consequential investigations in history: the Hillary Clinton investigation and a separate inquiry into potential coordination between Russia and Trump’s 2016 presidential campaign. Republicans have long criticized the FBI’s handling of both investigations.

Baker denied any wrongdoing during his two years at Twitter and said that despite disagreeing with the decision to block links to the Post story, “I believe that the public record reveals that my client acted in a manner that was fully consistent with the First Amendment.”

There has been no evidence that Twitter’s platform is biased against conservatives; studies have found the opposite when it comes to conservative media in particular. But the issue continues to preoccupy Republican members of Congress.

And some experts said questions around government influence on Big Tech’s content moderation are legitimate.

Ex-Twitter Executives to Testify About Hunter Biden Story Before House Panel

Former Twitter employees are expected to testify next week before the House Oversight Committee about the social media platform’s handling of reporting on President Joe Biden’s son, Hunter Biden.

The scheduled testimony, confirmed by the committee Monday, will be the first time the three former executives will appear before Congress to discuss the company’s decision to initially block from Twitter a New York Post article regarding Hunter Biden’s laptop in the weeks before the 2020 election.

Republicans have said the story was suppressed for political reasons, though no evidence has been released to support that claim. The witnesses for the February 8 hearing are expected to be Vijaya Gadde, former chief legal officer; James Baker, former deputy general counsel; and Yoel Roth, former head of safety and integrity.

The hearing is among the first of many in a GOP-controlled House to be focused on Biden and his family, as Republicans wield the power of their new, albeit slim, majority.

The New York Post first reported in October 2020 that it had received from former President Donald Trump’s personal attorney, Rudy Giuliani, a copy of a hard drive of a laptop that Hunter Biden had dropped off 18 months earlier at a Delaware computer repair shop and never retrieved. Twitter initially blocked people from sharing links to the story for several days.

Months later, Twitter’s then-CEO Jack Dorsey called the company’s communications around the Post article “not great.” He added that blocking the article’s URL with “zero context” around why it was blocked was “unacceptable.”

The Post article at the time was greeted with skepticism due to questions about the laptop’s origins, including Giuliani’s involvement, and because top officials in the Trump administration already had warned that Russia was working to denigrate Joe Biden ahead of the 2020 election.

The Kremlin had interfered in the 2016 race by hacking Democratic emails that were subsequently leaked, and there were widespread fears across Washington that Russia would meddle again in the 2020 race.

“This is why we’re investigating the Biden family for influence peddling,” Rep. James Comer, chairman of the Oversight committee, said at a press event Monday morning. “We want to make sure that our national security is not compromised.”

The White House has sought to discredit the Republican probes into Hunter Biden, calling them “divorced-from-reality political stunts.”

Nonetheless, Republicans now hold subpoena power in the House, giving them the authority to compel testimony and conduct an aggressive investigation. GOP staff has spent the past year analyzing messages and financial transactions found on the laptop that belonged to the president’s younger son. Comer has previously said the evidence they have compiled is “overwhelming,” but did not offer specifics.

Comer has pledged there won’t be hearings regarding the Biden family until the committee has the evidence to back up any claims of alleged wrongdoing. He also acknowledged the stakes are high whenever an investigation centers on the leader of a political party.

On Monday, the Kentucky Republican, speaking at a National Press Club event, said that he could not guarantee a subpoena of Hunter Biden during his term. “We’re going to go where the investigation leads us. Maybe there’s nothing there.”

Comer added, “We’ll see.” 

Microsoft Bakes ChatGPT-Like Tech Into Search Engine Bing

Microsoft is fusing ChatGPT-like technology into its search engine Bing, transforming an internet service that now trails far behind Google into a new way of communicating with artificial intelligence.

The revamping of Microsoft’s second-place search engine could give the software giant a head start against other tech companies in capitalizing on the worldwide excitement surrounding ChatGPT, a tool that’s awakened millions of people to the possibilities of the latest AI technology.

Along with adding it to Bing, Microsoft is also integrating the chatbot technology into its Edge browser. Microsoft announced the new technology at an event Tuesday at its headquarters in Redmond, Washington.

Microsoft said a public preview of the new Bing would launch Tuesday for users who sign up for it, with the technology scaling to millions of users in coming weeks.

Yusuf Mehdi, corporate vice president and consumer chief marketing officer, said the new Bing will go live for desktop on limited preview. Everyone can try a limited number of queries, he said.

The strengthening partnership with ChatGPT-maker OpenAI has been years in the making, starting with a $1 billion investment from Microsoft in 2019 that led to the development of a powerful supercomputer specifically built to train the San Francisco startup’s AI models.

While it’s not always factual or logical, ChatGPT’s mastery of language and grammar comes from having ingested a huge trove of digitized books, Wikipedia entries, instruction manuals, newspapers and other online writings.

The shift to making search engines more conversational — able to confidently answer questions rather than offering links to other websites — could change the advertising-fueled search business, but also poses risks if the AI systems don’t get their facts right.

Their opaqueness also makes it hard to trace AI outputs back to the original human-made images and texts the systems have effectively memorized.

Google has been cautious about such moves. But in response to pressure over ChatGPT’s popularity, Google CEO Sundar Pichai on Monday announced a new conversational service named Bard that will be available exclusively to a group of “trusted testers” before being widely released later this year.

Google’s chatbot is supposed to be able to explain complex subjects such as outer space discoveries in terms simple enough for a child to understand. Google says the service will also perform more mundane tasks, such as providing tips for planning a party or lunch ideas based on what food is left in a refrigerator. Other tech rivals such as Facebook parent Meta and Amazon have also worked on similar technology, but Microsoft’s latest moves aim to position it at the center of the ChatGPT zeitgeist.

Microsoft disclosed in January that it was pouring billions more dollars into OpenAI as it looks to fuse the technology behind ChatGPT, the image-generator DALL-E and other OpenAI innovations into an array of Microsoft products tied to its cloud computing platform and its Office suite of workplace products like email and spreadsheets.

The most surprising might be the integration with Bing, which is the second-place search engine in many markets but has never come close to challenging Google’s dominant position.

Bing launched in 2009 as a rebranding of Microsoft’s earlier search engines and was run for a time by Satya Nadella, years before he took over as CEO. Its significance was boosted when Yahoo and Microsoft signed a deal for Bing to power Yahoo’s search engine, giving Microsoft access to Yahoo’s greater search share. Similar deals infused Bing into the search features for devices made by other companies, though users wouldn’t necessarily know that Microsoft was powering their searches.

By making it a destination for ChatGPT-like conversations, Microsoft could invite more users to give Bing a try.

On the surface, at least, a Bing integration seems far different from what OpenAI has in mind for its technology.

OpenAI has long voiced an ambitious vision for safely guiding what’s known as AGI, or artificial general intelligence, a not-yet-realized concept that harkens back to ideas from science fiction about human-like machines. OpenAI’s website describes AGI as “highly autonomous systems that outperform humans at most economically valuable work.”

OpenAI started out as a nonprofit research laboratory when it launched in December 2015 with backing from Tesla CEO Elon Musk and others. Its stated aims were to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

That changed in 2018 when it incorporated a for-profit arm, OpenAI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT model for generating human-like paragraphs of readable text.

OpenAI’s other products include the image-generator DALL-E, first released in 2021, the computer programming assistant Codex and the speech recognition tool Whisper.

Technology Brings Hope to Ukraine’s Wounded

The war in Ukraine has left thousands of wounded soldiers, many of whom require the latest technologies to heal and return to normal life. For VOA, Anna Chernikova visited a rehabilitation center near Kyiv, where cutting-edge technology and holistic care are giving soldiers hope. (Myroslava Gongadze contributed to this report. Camera: Eugene Shynkar)

Seeing Is Believing? Global Scramble to Tackle Deepfakes

Chatbots spouting falsehoods, face-swapping apps crafting porn videos, and cloned voices defrauding companies of millions — the scramble is on to rein in AI deepfakes that have become a misinformation super spreader.

Artificial Intelligence is redefining the proverb “seeing is believing,” with a deluge of images created out of thin air and people shown mouthing things they never said in real-looking deepfakes that have eroded online trust.

“Yikes. (Definitely) not me,” tweeted billionaire Elon Musk last year in one vivid example of a deepfake video that showed him promoting a cryptocurrency scam.

China recently adopted expansive rules to regulate deepfakes but most countries appear to be struggling to keep up with the fast-evolving technology amid concerns that regulation could stymie innovation or be misused to curtail free speech.

Experts warn that deepfake detectors are vastly outpaced by creators, who are hard to catch as they operate anonymously, using AI-based software that once required specialized skills but is now widely available at low cost.

Facebook owner Meta last year said it took down a deepfake video of Ukrainian President Volodymyr Zelenskyy urging citizens to lay down their weapons and surrender to Russia.

And British campaigner Kate Isaacs, 30, said her “heart sank” when her face appeared in a deepfake porn video that unleashed a barrage of online abuse after an unknown user posted it on Twitter.

“I remember just feeling like this video was going to go everywhere — it was horrendous,” Isaacs, who campaigns against non-consensual porn, was quoted as saying by the BBC in October.

The following month, the British government voiced concern about deepfakes and warned of a popular website that “virtually strips women naked.”

‘Information apocalypse’

With no barriers to creating AI-synthesized text, audio and video, the potential for misuse in identity theft, financial fraud and tarnishing reputations has sparked global alarm.

The Eurasia Group called the AI tools “weapons of mass disruption.”

“Technological advances in artificial intelligence will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the group warned in a report.

“Advances in deepfakes, facial recognition, and voice synthesis software will render control over one’s likeness a relic of the past.”

This week AI startup ElevenLabs admitted that its voice cloning tool could be misused for “malicious purposes” after users posted a deepfake audio purporting to be actor Emma Watson reading Adolf Hitler’s biography “Mein Kampf.”

The growing volume of deepfakes may lead to what the European law enforcement agency Europol described as an “information apocalypse,” a scenario where many people are unable to distinguish fact from fiction.

“Experts fear this may lead to a situation where citizens no longer have a shared reality or could create societal confusion about which information sources are reliable,” Europol said in a report.

That was demonstrated last weekend when NFL player Damar Hamlin spoke to his fans in a video for the first time since he suffered a cardiac arrest during a game.

Hamlin thanked medical professionals responsible for his recovery, but many who believed conspiracy theories that the COVID-19 vaccine was behind his on-field collapse baselessly labeled his video a deepfake.

‘Super spreader’

China began enforcing new rules last month that require businesses offering deepfake services to obtain the real identities of their users. The rules also require deepfake content to be appropriately tagged to avoid “any confusion.”

The rules came after the Chinese government warned that deepfakes present a “danger to national security and social stability.”

In the United States, where lawmakers have pushed for a task force to police deepfakes, digital rights activists caution against legislative overreach that could kill innovation or target legitimate content.

The European Union, meanwhile, is locked in heated discussions over its proposed “AI Act.”

The law, which the EU is racing to pass this year, will require users to disclose deepfakes but many fear the legislation could prove toothless if it does not cover creative or satirical content.

“How do you reinstate digital trust with transparency? That is the real question right now,” Jason Davis, a research professor at Syracuse University, told AFP.

“The [detection] tools are coming and they’re coming relatively quickly. But the technology is moving perhaps even quicker. So like cyber security, we will never solve this, we will only hope to keep up.”

Many are already struggling to comprehend advances such as ChatGPT, a chatbot created by the U.S.-based OpenAI that is capable of generating strikingly cogent texts on almost any topic.

In a study, media watchdog NewsGuard, which called ChatGPT the “next great misinformation super spreader,” said most of the chatbot’s responses to prompts on topics such as COVID-19 and school shootings were “eloquent, false and misleading.”

“The results confirm fears … about how the tool can be weaponized in the wrong hands,” NewsGuard said.

Musk Found Not Liable in Tesla Tweet Trial

Jurors on Friday cleared Elon Musk of liability for investors’ losses in a fraud trial over his 2018 tweets falsely claiming that he had funding in place to take Tesla private.

The tweets sent the Tesla share price on a rollercoaster ride, and Musk was sued by shareholders who said the tycoon acted recklessly in an effort to squeeze investors who had bet against the company.

Jurors deliberated for barely two hours before returning to the San Francisco courtroom to say they unanimously agreed that neither Musk nor the Tesla board perpetrated fraud with the tweets and in their aftermath.

“Thank goodness, the wisdom of the people has prevailed!” tweeted Musk, who had tried but failed to get the trial moved to Texas on the grounds jurors in California would be biased against him.

“I am deeply appreciative of the jury’s unanimous finding of innocence in the Tesla 420 take-private case.”

Attorney Nicholas Porritt, who represents Glen Littleton and other investors in Tesla, had argued in court that the case was about making sure the rich and powerful have to abide by the same stock market rules as everyone else.

“Elon Musk published tweets that were false with reckless disregard as to their truth,” Porritt told the panel of nine jurors during closing arguments.

Porritt pointed to expert testimony estimating that Musk’s claim about funding, which turned out not to be true, cost investors billions of dollars overall, and argued that Musk and the Tesla board should be made to pay damages.

But Musk attorney Alex Spiro successfully countered that the billionaire may have erred on wording in a hasty tweet, but that he did not set out to deceive anyone.

Spiro also portrayed the mercurial entrepreneur, who now owns Twitter, as having had a troubled childhood and having come to the United States as a poor youth chasing dreams.

No joke

Musk testified over three days on the witness stand that his 2018 tweet about taking Tesla private at $420 a share was no joke and that Saudi Arabia’s sovereign wealth fund was serious about helping him do it.

“To Elon Musk, if he believes it or even just thinks about it then it’s true no matter how objectively false or exaggerated it may be,” Porritt told jurors.

Tesla and its board were also to blame, because they let Musk use his Twitter account to post news about the company, Porritt argued.

The case revolved around a pair of tweets in which Musk said “funding secured” for a project to buy out the publicly traded electric automaker, then in a second tweet added that “investor support is confirmed.”

“He wrote two words ‘funding secured’ that were technically inaccurate,” Spiro said of Musk while addressing jurors.

“Whatever you think of him, this isn’t a bad tweeter trial, it’s a ‘did they prove this man committed fraud?’ trial.”

Musk did not intend to deceive anyone with the tweets and had the connections and wealth to take Tesla private, Spiro contended.

During the trial playing out in federal court in San Francisco, Spiro said that even though the tweets may have been a “reckless choice of words,” they were not fraud.

“I’m being accused of fraud; it’s outrageous,” Musk said while testifying in person.

Musk said he fired off the tweets at issue after learning of a Financial Times story about a Saudi Arabian investment fund wanting to acquire a stake in Tesla.

The trial came at a sensitive time for Musk, who has dominated the headlines for his chaotic takeover of Twitter where he has laid off more than half of the 7,500 employees and scaled down content moderation. 

ChatGPT: The Promises, Pitfalls and Panic

Excitement around ChatGPT — an easy-to-use AI chatbot that can deliver an essay or computer code upon request and within seconds — has sent schools into panic and turned Big Tech green with envy.

The potential impact of ChatGPT on society remains complicated and unclear even as its creator on Wednesday announced a paid subscription version in the United States.

Here is a closer look at what ChatGPT is (and is not):

Is this a turning point?  

It is entirely possible that November’s release of ChatGPT by California company OpenAI will be remembered as a turning point in introducing a new wave of artificial intelligence to the wider public.  

What is less clear is whether ChatGPT is actually a breakthrough, with some critics calling it a brilliant PR move that helped OpenAI score billions of dollars in investments from Microsoft.

Yann LeCun, Chief AI Scientist at Meta and professor at New York University, believes “ChatGPT is not a particularly interesting scientific advance,” calling the app a “flashy demo” built by talented engineers.

LeCun, speaking to the Big Technology Podcast, said ChatGPT is void of “any internal model of the world” and is merely churning “one word after another” based on inputs and patterns found on the internet.

“When working with these AI models, you have to remember that they’re slot machines, not calculators,” warned Haomiao Huang of Kleiner Perkins, the Silicon Valley venture capital firm.

“Every time you ask a question and pull the arm, you get an answer that could be marvelous… or not… The failures can be extremely unpredictable,” Huang wrote in Ars Technica, the tech news website.
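LeCun’s point that such systems merely churn “one word after another” can be illustrated with a toy autoregressive sampler. This is a deliberately simplified sketch, not OpenAI’s model: the vocabulary, transition probabilities, and bigram structure below are invented for illustration, whereas real systems condition on long contexts with billions of parameters.

```python
import random

# Toy bigram "language model": each word maps to candidate next words
# with probabilities. All words and weights here are made up.
BIGRAMS = {
    "<start>":  [("the", 0.6), ("a", 0.4)],
    "the":      [("model", 0.5), ("internet", 0.5)],
    "a":        [("word", 1.0)],
    "model":    [("predicts", 1.0)],
    "internet": [("<end>", 1.0)],
    "predicts": [("words", 1.0)],
    "word":     [("<end>", 1.0)],
    "words":    [("<end>", 1.0)],
}

def generate(max_words=10, seed=None):
    """Emit one word after another, each drawn only from the
    distribution conditioned on the previous word -- no world model,
    just patterns. This is the 'slot machine' behavior in miniature."""
    rng = random.Random(seed)
    word, out = "<start>", []
    for _ in range(max_words):
        choices, weights = zip(*BIGRAMS[word])
        word = rng.choices(choices, weights=weights)[0]
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate(seed=0))
```

Because each step is a weighted random draw, repeated calls with different seeds produce different sentences from the same model, which is exactly the unpredictability Huang describes.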

Just like Google

ChatGPT is powered by an AI language model that is nearly three years old — OpenAI’s GPT-3 — and the chatbot only uses a part of its capability.  

The true revolution is the humanlike chat, said Jason Davis, research professor at Syracuse University.

“It’s familiar, it’s conversational and guess what? It’s kind of like putting in a Google search request,” he said.

ChatGPT’s rockstar-like success even shocked its creators at OpenAI, which received billions in new financing from Microsoft in January.

“Given the magnitude of the economic impact we expect here, more gradual is better,” OpenAI CEO Sam Altman said in an interview with StrictlyVC, a newsletter.

“We put GPT-3 out almost three years ago… so the incremental update from that to ChatGPT, I felt like should have been predictable and I want to do more introspection on why I was sort of miscalibrated on that,” he said.

The risk, Altman added, was startling the public and policymakers. On Tuesday, his company unveiled a tool for detecting text generated by AI, amid concerns from teachers that students may rely on artificial intelligence to do their homework.

What now?

From lawyers to speechwriters, from coders to journalists, everyone is waiting breathlessly to feel disruption caused by ChatGPT. OpenAI just launched a paid version of the chatbot – $20 per month for an improved and faster service.

For now, officially, the first significant application of OpenAI’s tech will be for Microsoft software products.  

Though details are scarce, most assume that ChatGPT-like capabilities will turn up on the Bing search engine and in the Office suite.

“Think about Microsoft Word. I don’t have to write an essay or an article, I just have to tell Microsoft Word what I wanted to write with a prompt,” said Davis.

He believes influencers on TikTok and Twitter will be the earliest adopters of this so-called generative AI since going viral requires huge amounts of content and ChatGPT can take care of that in no time.

This of course raises the specter of disinformation and spamming carried out at an industrial scale.  

For now, Davis said the reach of ChatGPT is very limited by computing power, but once this is ramped up, the opportunities and potential dangers will grow exponentially.

And much like the ever-imminent arrival of self-driving cars that never quite happens, experts disagree on whether that is a question of months or years.

Ridicule

LeCun said Meta and Google have refrained from releasing AI as potent as ChatGPT out of fear of ridicule and backlash.

Quieter releases of language-based bots – like Meta’s Blenderbot or Microsoft’s Tay for example – were quickly shown capable of generating racist or inappropriate content.

Tech giants have to think hard before releasing something “that is going to spew nonsense” and disappoint, he said.

Zimbabwe Plans to Build $60 Billion ‘Cyber City’ to Ease Harare Congestion

Zimbabwe plans to build “Zim Cyber City,” a modern capital expected to cost up to $60 billion in raised funds and include new government buildings and a presidential palace. Critics are blasting the plan as wasteful when more than half the population lives in poverty and the government has let the current capital, Harare, fall apart. Columbus Mavhunga reports from Mount Hampden, Zimbabwe. Camera: Blessing Chigwenhembe

Boeing Bids Farewell to an Icon, Delivers Last 747 Jumbo Jet

Boeing bid farewell to an icon on Tuesday, delivering its final 747 jumbo jet as thousands of workers who helped build the planes over the past 55 years looked on. 

Since its first flight in 1969, the giant yet graceful 747 has served as a cargo plane, a commercial aircraft capable of carrying nearly 500 passengers, a transport for NASA’s space shuttles, and the Air Force One presidential aircraft. It revolutionized travel, connecting international cities that had never before had direct routes and helping democratize passenger flight. 

But over about the past 15 years, Boeing and its European rival Airbus have introduced more profitable and fuel-efficient wide-body planes, with only two engines to maintain instead of the 747’s four. The final plane is the 1,574th built by Boeing in the Puget Sound region of Washington state. 

Thousands of workers joined Boeing and other industry executives from around the world — as well as actor and pilot John Travolta, who has flown 747s — Tuesday for a ceremony in the company’s massive factory north of Seattle, marking the delivery of the last one to cargo carrier Atlas Air. 

“If you love this business, you’ve been dreading this moment,” said longtime aviation analyst Richard Aboulafia. “Nobody wants a four-engine airliner anymore, but that doesn’t erase the tremendous contribution the aircraft made to the development of the industry or its remarkable legacy.” 

Boeing set out to build the 747 after losing a contract for a huge military transport, the C-5A. The idea was to take advantage of the new engines developed for the transport — high-bypass turbofan engines, which burned less fuel by passing air around the engine core, enabling a farther flight range — and to use them for a newly imagined civilian aircraft. 

It took more than 50,000 Boeing workers less than 16 months to churn out the first 747 — a Herculean effort that earned them the nickname “The Incredibles.” The jumbo jet’s production required the construction of a massive factory in Everett, north of Seattle — the world’s largest building by volume. The factory wasn’t even completed when the first planes were finished. 

Among those in attendance was Desi Evans, 92, who joined Boeing at its factory in Renton, south of Seattle, in 1957 and went on to spend 38 years at the company before retiring. One day in 1967, his boss told him he’d be joining the 747 program in Everett — the next morning. 

“They told me, ‘Wear rubber boots, a hard hat and dress warm, because it’s a sea of mud,'” Evans recalled. “And it was — they were getting ready for the erection of the factory.” 

He was assigned as a supervisor to help figure out how the interior of the passenger cabin would be installed and later oversaw crews that worked on sealing and painting the planes. 

“When that very first 747 rolled out, it was an incredible time,” he said as he stood before the last plane, parked outside the factory. “You felt elated — like you’re making history. You’re part of something big, and it’s still big, even if this is the last one.” 

The plane’s fuselage was 225 feet (68.5 meters) long and the tail stood as tall as a six-story building. The plane’s design included a second deck extending from the cockpit back over the first third of the plane, giving it a distinctive hump and inspiring a nickname, the Whale. More romantically, the 747 became known as the Queen of the Skies. 

Some airlines turned the second deck into a first-class cocktail lounge, while even the lower deck sometimes featured lounges or even a piano bar. One decommissioned 747, originally built for Singapore Airlines in 1976, has been converted into a 33-room hotel near the airport in Stockholm. 

“It was the first big carrier, the first widebody, so it set a new standard for airlines to figure out what to do with it, and how to fill it,” said Guillaume de Syon, a history professor at Pennsylvania’s Albright College who specializes in aviation and mobility. “It became the essence of mass air travel: You couldn’t fill it with people paying full price, so you need to lower prices to get people onboard. It contributed to what happened in the late 1970s with the deregulation of air travel.” 

The first 747 entered service in 1970 on Pan Am’s New York-London route, and its timing was terrible, Aboulafia said. It debuted shortly before the oil crisis of 1973, amid a recession that saw Boeing’s employment fall from 100,800 employees in 1967 to a low of 38,690 in April 1971. The “Boeing bust” was infamously marked by a billboard near the Seattle-Tacoma International Airport that read, “Will the last person leaving SEATTLE — Turn out the lights.” 

An updated model — the 747-400 series — arrived in the late 1980s and had much better timing, coinciding with the Asian economic boom of the early 1990s, Aboulafia said. He took a Cathay Pacific 747 from Los Angeles to Hong Kong as a twentysomething backpacker in 1991. 

“Even people like me could go see Asia,” Aboulafia said. “Before, you had to stop for fuel in Alaska or Hawaii and it cost a lot more. This was a straight shot — and reasonably priced.” 

Delta was the last U.S. airline to use the 747 for passenger flights, which ended in 2017, although some other international carriers continue to fly it, including the German airline Lufthansa. 

Lufthansa CEO Carsten Spohr recalled traveling in a 747 as a young exchange student and said that when he realized he’d be traveling to the West Coast of the U.S. for Tuesday’s event, there was only one way to go: riding first-class in the nose of a Lufthansa 747 from Frankfurt to San Francisco. He promised the crowd Lufthansa would keep flying the 747 for many years to come. 

“We just love the airplane,” he said. 

Atlas Air ordered four 747-8 freighters early last year, with the final one — emblazoned with an image of Joe Sutter, the engineer who oversaw the 747’s original design team — delivered Tuesday. Atlas CEO John Dietrich called the 747 the greatest air freighter, thanks in part to its unique capacity to load through the nose cone. 

Cheaters Beware: ChatGPT Maker Releases AI Detection Tool 

The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.

The new AI Text Classifier launched Tuesday by OpenAI follows a weeks-long discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.

OpenAI cautions that its new tool – like others already available – is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of OpenAI’s alignment team, which is tasked with making its systems safer.

“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.

Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched November 30 as a free application on OpenAI’s website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked a panic among some educators.

By the time schools opened for the new year, New York City, Los Angeles and other big public school districts began to block its use in classrooms and on school devices.

The Seattle Public Schools district initially blocked ChatGPT on all school devices in December but then opened access to educators who want to use it as a teaching tool, said Tim Robinson, the district spokesman.

“We can’t afford to ignore it,” Robinson said.

The district is also discussing possibly expanding the use of ChatGPT into classrooms to let teachers use it to train students to be better critical thinkers and to let students use the application as a “personal tutor” or to help generate new ideas when working on an assignment, Robinson said.

School districts around the country say they are seeing the conversation around ChatGPT evolve quickly.

“The initial reaction was ‘OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT,'” said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. Now there is a growing realization that “this is the future” and blocking it is not the solution, he said.

“I think we would be naïve if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power,” said Page, who thinks districts like his own will eventually unblock ChatGPT, especially once the company’s detection service is in place.

OpenAI emphasized the limitations of its detection tool in a blog post Tuesday, but said that in addition to deterring plagiarism, it could help to detect automated disinformation campaigns and other misuse of AI to mimic humans.

The longer a passage of text, the better the tool is at detecting if an AI or human wrote something. Type in any text — a college admissions essay, or a literary analysis of Ralph Ellison’s “Invisible Man” — and the tool will label it as either “very unlikely, unlikely, unclear if it is, possibly, or likely” AI-generated.
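The five-step label scale described above amounts to bucketing a likelihood score into verbal categories. The sketch below shows one plausible way such a mapping could work; the numeric thresholds are hypothetical assumptions for illustration, since OpenAI has not published its exact cutoffs.

```python
# Hypothetical thresholds -- the real classifier's cutoffs are not public.
# Each entry: (upper bound on the 0..1 "AI-generated" score, label).
LABELS = [
    (0.10, "very unlikely"),
    (0.45, "unlikely"),
    (0.90, "unclear if it is"),
    (0.98, "possibly"),
    (1.01, "likely"),  # 1.01 so that a score of exactly 1.0 matches
]

def label_ai_likelihood(score: float) -> str:
    """Map a probability that a passage is AI-generated to one of the
    five verbal labels used in the article's description of the tool."""
    for upper_bound, label in LABELS:
        if score < upper_bound:
            return label
    return "likely"
```

The point of the verbal scale is the same hedging Leike emphasizes: the tool reports graded uncertainty, not a yes-or-no verdict.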

But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings but often confidently spits out falsehoods or nonsense, it’s not easy to interpret how it came up with a result.

“We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s really not much we could say at this point about how the classifier actually works.”

“Like many other technologies, it may be that one district decides that it’s inappropriate for use in their classrooms,” said OpenAI policy researcher Lama Ahmad. “We don’t really push them one way or another.”

Huawei Latest Target of US Crackdown on China Tech

China says it is “deeply concerned” over reports that the United States is moving to further restrict sales of American technology to Huawei, a tech company that U.S. officials have long singled out as a threat to national security for its alleged support of Beijing’s espionage efforts.

As first reported by the Financial Times, the U.S. Department of Commerce has informed American firms that it will no longer issue licenses for technology exports to Huawei, thereby isolating the Shenzhen-based company from supplies it needs to make its products.

The White House and Commerce Department have not responded to VOA’s request for confirmation of the reports. But observers say the move may be the latest tactic in the Biden administration’s geoeconomics strategy as it comes under increasing Republican pressure to outcompete China. 

The crackdown on Chinese companies began under the Trump administration, which in 2019 added Huawei to an export blacklist but made exceptions for some American firms, including Qualcomm and Intel, to provide non-5G technology licenses.

Since taking office in 2021, President Joe Biden has taken an even more aggressive stance than his predecessor, Donald Trump. Now the Biden administration appears to be heading toward a total ban on all tech exports to Huawei, said Sam Howell, who researches quantum information science at the Center for a New American Security’s Technology and National Security program.

“These new restrictions from what we understand so far would include items below the 5G level,” she told VOA. “So 4G items, Wi-Fi 6 and [Wi-Fi] 7, artificial intelligence, high performance computing and cloud capabilities as well.”

Should the Commerce Department follow through with the ban, there will likely be pushback from U.S. companies whose revenues will be directly affected, Howell said. Currently Intel and Qualcomm still sell chips used in laptops and phones manufactured by Huawei.

Huawei and Beijing have denied that they are a threat to other countries’ national security. Foreign ministry spokesperson Mao Ning accused Washington of “overstretching the concept of national security and abusing state power” to suppress Chinese competitors.

“Such practices are contrary to the principles of market economy” and are “blatant technological hegemony,” Mao said. 

Outcompeting Chinese tech

The latest U.S. move on Huawei is part of a U.S. effort to outcompete China in the cutting-edge technology sector.

In October, Biden imposed sweeping restrictions on providing advanced semiconductors and chipmaking equipment to Chinese companies, seeking to maintain dominance particularly on the most advanced chips. His administration is rallying allies behind the effort, including the Netherlands, Japan, South Korea and Taiwan – home to leading companies that play key roles in the industry’s supply chain.

U.S. officials say export restrictions on chips are necessary because China can use semiconductors to advance its military systems, including weapons of mass destruction, and commit human rights abuses. 

The October restrictions follow the CHIPS and Science Act of 2022, which Biden signed into law in August and which restricts companies receiving U.S. subsidies from investing in and expanding cutting-edge chipmaking facilities in China. The law also provides $52 billion to strengthen the domestic semiconductor industry.

Beijing has invested heavily in its own semiconductor sector, with plans to invest $1.4 trillion in advanced technologies in a bid to achieve 70% self-sufficiency in semiconductors by 2025. 

TikTok a target

TikTok, a social media application owned by the Chinese company ByteDance that has built a massive following especially among American youth, is also under U.S. lawmakers’ scrutiny due to suspicion that it could be used as a tool of Chinese foreign espionage or influence.

CEO Shou Zi Chew is scheduled to appear before the House Energy and Commerce Committee on March 23 to testify about TikTok’s “consumer privacy and data security practices, the platforms’ impact on kids, and their relationship with the Chinese Communist Party.”

Lawmakers are divided on whether to ban the popular app, which has been downloaded onto about 100 million U.S. smartphones, allow it, or force its sale to an American buyer.

Earlier in January, Congress set up the House Select Committee on China, tasked with crafting legislation to combat the dangers of a rising China.

As Children in US Study Online, Apps Watch Their Every Move 

For New York teacher Michael Flanagan, the pandemic was a crash course in new technology — rushing out laptops to stay-at-home students and shifting hectic school life online.

Students are long back at school, but the technology has lived on, and with it has come a new generation of apps that monitor pupils online, sometimes round the clock and even on days off spent with family and friends at home.

The programs scan students’ online activity, social media posts and more — aiming to keep them focused, detect mental health problems and flag up any potential for violence.

“You can’t unring the bell,” said Flanagan, who teaches social studies and economics. “Everybody has a device.”

The new trend for tracking, however, has raised fears that some of the apps may target minority pupils, while others have outed LGBT+ students without their consent, and many are used to instill discipline as much as deliver care.

So Flanagan has parted ways with many of his colleagues and won’t use such apps to monitor his students online.

He recalled seeing a demo of one such program, GoGuardian, in which a teacher showed — in real time — what one student was doing on his computer. The child was at home, on a day off.

Such scrutiny raised a big red flag for Flanagan.

“I have a school-issued device, and I know that there’s no expectation of privacy. But I’m a grown man — these kids don’t know that,” he said.

A New York City Department of Education spokesperson said that the use of GoGuardian Teacher “is only for teachers to see what’s on the student’s screen in the moment, provide refocusing prompts, and limit access to inappropriate content.”

Valued at more than $1 billion, GoGuardian — one of a handful of high-profile apps in the market — is now monitoring more than 22 million students, including in the New York City, Chicago and Los Angeles public systems.

Globally, the education technology sector is expected to grow by $133 billion from 2021 to 2026, market researcher Technavio said last year.

Parents expect schools to keep children safe in classrooms or on field trips, and schools also “have a responsibility to keep students safe in digital spaces and on school-issued devices,” GoGuardian said in a statement.

The company says it “provides educators with the ability to protect students from harmful or explicit content.”

Nowadays, online monitoring “is just part of the school environment,” said Jamie Gorosh, policy counsel with the Future of Privacy Forum, a watchdog group.

And even as schools move beyond the pandemic, “it doesn’t look like we’re going back,” she said.

Guns and depression

A key priority for monitoring is to keep students engaged in their academic work, but it also taps into fast-rising concerns over school violence and children’s mental health, which medical groups in 2021 termed a national emergency.

According to federal data released this month, 82% of schools now train staff on how to spot mental health problems, up from 60% in 2018; 65% have confidential threat-reporting systems, up 15% in the same period.

In a survey last year by the nonprofit Center for Democracy and Technology (CDT), 89% of teachers reported their schools were monitoring student online activity.

Yet it is not clear that the software creates safer schools.

Gorosh cited May’s shooting in Uvalde, Texas, that left 21 dead in a school that had invested heavily in monitoring tech.

Some worry the tracking apps could actively cause harm.

The CDT report, for instance, found that while administrators overwhelmingly say the purpose of monitoring software is student safety, “it’s being used far more commonly for disciplinary purposes … and we’re seeing a discrepancy falling along racial lines,” said Elizabeth Laird, director of CDT’s Equity in Civic Technology program.

The programs’ use of artificial intelligence to scan for keywords has also outed LGBT+ students without their consent, she said, noting that 29% of students who identify as LGBT+ said they or someone they knew had experienced this.

And more than a third of teachers said their schools send alerts automatically to law enforcement outside school hours.

“The stated purpose is to keep students safe, and here we have set up a system that is routinizing law enforcement access to this information and finding reasons for them to go into students’ homes,” Laird said.

‘Preyed upon’

A report by federal lawmakers last year into four companies making student monitoring software found that none had made efforts to see if the programs disproportionately targeted marginalized students.

“Students should not be surveilled on the same platforms they use for their schooling,” Senator Ed Markey of Massachusetts, one of the report’s co-authors, told the Thomson Reuters Foundation in a statement.

“As school districts work to incorporate technology in the classroom, we must ensure children and teenagers are not preyed upon by a web of targeted advertising or intrusive monitoring of any kind.”

The Department of Education has committed to releasing guidelines around the use of AI early this year.

A spokesperson said the agency was “committed to protecting the civil rights of all students.”

Aside from the ethical questions around spying on children, many parents are frustrated by the lack of transparency.

“We need more clarity on whether data is being collected, especially sensitive data. You should have at least notification, and probably consent,” said Cassie Creswell, head of Illinois Families for Public Schools, an advocacy group.

Creswell, who has a daughter in a Chicago public school, said several parents have been sent alerts about their children’s online searches, despite not having been asked or told about the monitoring in the first place.

Another child had faced repeated warnings not to play a particular game — even though the student was playing it at home on the family computer, she said.

Creswell and others acknowledge that the issues monitoring aims to address — bullying, depression, violence — are real and need tackling, but question whether technology is the answer.

“If we’re talking about self-harm monitoring, is this the best way to approach the issue?” said Gorosh.

Pointing to evidence suggesting AI is imperfect in capturing the warning signs, she said increased funding for school counselors could be more narrowly tailored to the problem.

“There are huge concerns,” she said. “But maybe technology isn’t the first step to answer some of those issues.”

US, EU Launch Agreement on Artificial Intelligence

The United States and European Union announced Friday an agreement to speed up and enhance the use of artificial intelligence to improve agriculture, health care, emergency response, climate forecasting and the electric grid. 

A senior U.S. administration official, discussing the initiative shortly before the official announcement, called it the first sweeping AI agreement between the United States and Europe. Previously, agreements on the issue had been limited to specific areas such as enhancing privacy, the official said.  

AI modeling, which refers to machine-learning algorithms that use data to make logical decisions, could be used to improve the speed and efficiency of government operations and services.  

“The magic here is in building joint models [while] leaving data where it is,” the senior administration official said. “The U.S. data stays in the U.S. and European data stays there, but we can build a model that talks to the European and the U.S. data, because the more data and the more diverse data, the better the model.” 

The initiative will give governments greater access to more detailed and data-rich AI models, leading to more efficient emergency responses and electric grid management, and other benefits, the administration official said. 

Pointing to the electric grid, the official said the United States collects data on how electricity is being used, where it is generated, and how to balance the grid’s load so that weather changes do not knock it offline. 

Many European countries have similar data points they gather relating to their own grids, the official said. Under the new partnership, all that data would be harnessed into a common AI model that would produce better results for emergency managers, grid operators and others relying on AI to improve systems.  
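The official’s description of “building joint models while leaving data where it is” resembles federated learning, in which only model parameters, never raw records, are shared. The sketch below is a minimal illustration of that idea under toy assumptions: a single-feature linear model and invented grid-load numbers, not any actual U.S. or EU dataset or the partnership’s real method.

```python
# Federated-averaging sketch: each region fits a model on its own data,
# and only the fitted coefficients cross the border to form a joint model.
# The datasets and the single-feature linear model are toy assumptions.

def fit_linear(xs, ys):
    """Least-squares slope and intercept for one region's local data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def federated_average(local_models, weights):
    """Average local (slope, intercept) pairs, weighted by dataset size,
    into a single joint model -- no raw data is ever exchanged."""
    total = sum(weights)
    slope = sum(w * m[0] for m, w in zip(local_models, weights)) / total
    intercept = sum(w * m[1] for m, w in zip(local_models, weights)) / total
    return slope, intercept

# Each side trains on its own (made-up) load measurements...
us_model = fit_linear([0, 1, 2, 3], [1, 3, 5, 7])   # slope 2, intercept 1
eu_model = fit_linear([0, 1, 2, 3], [2, 4, 6, 8])   # slope 2, intercept 2
# ...and only the parameters are pooled into the joint model.
joint = federated_average([us_model, eu_model], weights=[4, 4])
print(joint)  # (2.0, 1.5)
```

The payoff matches the official’s remark: the joint model benefits from more and more diverse data than either side’s local model, while each region’s raw data stays at home.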

The partnership is currently between the White House and the European Commission, the executive arm of the 27-member European Union. The senior administration official said other countries would be invited to join in the coming months.