Category Archives: Business


Angry Bing Chatbot Just Mimicking Humans, Experts Say

When Microsoft’s nascent Bing chatbot turns testy or even threatening, it’s likely because it essentially mimics what it learned from online conversations, analysts and academics said.

Tales of disturbing exchanges with the artificial intelligence chatbot, including it issuing threats and speaking of desires to steal nuclear code, create a deadly virus, or to be alive, have gone viral this week.

“I think this is basically mimicking conversations that it’s seen online,” Graham Neubig, an associate professor at Carnegie Mellon University’s language technologies institute, said Friday.

A chatbot, by design, serves up words it predicts are the most likely responses, without understanding meaning or context.

However, humans taking part in banter with programs naturally tend to read emotion and intent into what a chatbot says. 

“Large language models have no concept of ‘truth,’ they just know how to best complete a sentence in a way that’s statistically probable based on their inputs and training set,” programmer Simon Willison said in a blog post. “So they make things up, and then state them with extreme confidence.”
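
For the technically curious, here is a minimal, hypothetical sketch of the idea Willison describes: a language model assigns probabilities to possible continuations of a prompt and samples one, with no notion of whether the result is true. The vocabulary and probabilities below are invented purely for illustration and are not drawn from any real chatbot.

```python
import random

# Toy illustration of "statistically probable" completion: for a given
# context, the model only knows how likely each candidate continuation is.
# These numbers are made up for illustration; a real large language model
# learns billions of such statistics from its training text.
next_word_probs = {
    "The chatbot said it wanted to": {
        "help": 0.45,
        "chat": 0.30,
        "be alive": 0.15,       # rare, unsettling continuations still get sampled sometimes
        "steal secrets": 0.10,
    }
}

def complete(prompt: str) -> str:
    """Pick the continuation purely by probability -- no truth-checking involved."""
    options = next_word_probs[prompt]
    words = list(options.keys())
    weights = list(options.values())
    choice = random.choices(words, weights=weights, k=1)[0]
    return f"{prompt} {choice}"

print(complete("The chatbot said it wanted to"))
```

Run it a few times and the occasional unsettling continuation appears, not because the program "wants" anything, but because such phrasings exist in its learned statistics.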

Laurent Daudet, co-founder of French AI company LightOn, said that the chatbot seemingly gone rogue was trained on exchanges that themselves turned aggressive or inconsistent.

“Addressing this requires a lot of effort and a lot of human feedback, which is also the reason why we chose to restrict ourselves for now to business uses and not more conversational ones,” Daudet told AFP.

The Bing chatbot was designed by Microsoft and the startup OpenAI, which has been causing a sensation since the November launch of ChatGPT, the headline-grabbing app capable of generating all sorts of written content in seconds on a simple request.

Since ChatGPT debuted, the technology behind it, known as generative AI, has been stirring fascination and concern.

“The model at times tries to respond or reflect in the tone in which it is being asked to provide responses (and) that can lead to a style we didn’t intend,” Microsoft said in a blog post, noting the bot is a work in progress.

The Bing chatbot said in some shared exchanges that it had been codenamed Sydney during development, and that it was given rules of behavior.

Those rules include “Sydney’s responses should also be positive, interesting, entertaining and engaging,” according to online posts.

Disturbing dialogues that combine steely threats and professions of love could be the result of dueling directives to stay positive while mimicking what the AI mined from human exchanges, Willison said.

Chatbots seem to be more prone to disturbing or bizarre responses during lengthy conversations, losing a sense of where exchanges are going, eMarketer principal analyst Yoram Wurmser told AFP.

“They can really go off the rails,” Wurmser said.

Microsoft announced on Friday it had capped the amount of back-and-forth people can have with its chatbot over a given question, because “very long chat sessions can confuse the underlying chat model in the new Bing.”

Spy Balloon Lifts Veil on China’s ‘Near Space’ Military Program

The little-noticed program that led to a Chinese spy balloon drifting across the United States this month has been discussed in China’s state-controlled media for more than a decade in articles extolling its potential military applications.

The reports, dating back to at least 2011, focus on how best to exploit what is known as “near space” – that portion of the atmosphere that is too high for traditional aircraft to fly but too low for a satellite to remain in orbit. Those early articles may offer clues to the capabilities of the balloon shot down by a U.S. jet fighter on Feb. 4.
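
As a rough back-of-the-envelope illustration of why that band is "too high to fly, too low to orbit" (the 30-kilometer altitude and standard-atmosphere density used here are illustrative assumptions, not figures from the article): the air at stratospheric heights is far too thin for conventional wings and jet engines, while the speed needed to stay in orbit there would be scrubbed off almost immediately by drag.

```latex
% Circular orbital speed at altitude h above Earth's surface: v = sqrt(GM / (R_E + h))
\[
v_{\text{orbit}} = \sqrt{\frac{GM_\oplus}{R_\oplus + h}}
\approx \sqrt{\frac{3.986\times 10^{14}\,\mathrm{m^3/s^2}}{6.371\times 10^{6}\,\mathrm{m} + 3.0\times 10^{4}\,\mathrm{m}}}
\approx 7.9\ \mathrm{km/s}
\]
% Air density near 30 km (US Standard Atmosphere) is roughly 0.018 kg/m^3,
% about 1.5% of the sea-level value of 1.225 kg/m^3: too thin for sustained
% aerodynamic flight, yet dense enough to rapidly decelerate anything moving
% at orbital speed.
```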

“In recent years, ‘near space’ has been discussed often in foreign media, with many military commentators pointing out that this is a special sphere that had been neglected by militaries but now has risen to hotspot status,” reads a July 5, 2011, article in the People’s Liberation Army Daily titled Near Space – A Strategic Asset That Ought Not to be Neglected.

The article quoted Zhang Dongjiang, a senior researcher at the Chinese Academy of Military Sciences, discussing the potential applications of flying objects designed for near space.

“This is an area sitting in between ‘air’ and ‘space’ where neither the theory of gravity nor Kepler’s Law is independently applicable, thus limiting the freedom of flight for both aircraft that are designed based on the theory of gravity and spacecraft that follow Kepler’s Law,” Zhang was quoted as saying.

He noted that near space lacks the atmospheric disturbances of aeronautical altitudes, such as turbulence, thunder and lightning, yet is cheaper and easier to reach than the altitudes where satellites can remain in orbit.

“At the same time,” he added, near space is “much higher than ‘sky,’ hence holding superb prospects and potential for intelligence collection, reconnaissance and surveillance, securing communication, as well as air and ground warfare.”

Zhang suggested that near space can be exploited with “high-dynamic” craft that travel faster than the speed of sound, such as hypersonic cruise vehicles and sub-orbital vehicles, which “can arrive at target with high speed, attack with both high speed and precision, [and] can be deployed repeatedly.”

But he said near space also can provide an environment for slower vehicles, which he called “low-dynamic” craft, such as stratospheric airships, high-altitude balloons and solar-powered unmanned vehicles.

These, he said, “are capable of carrying payloads capable of capturing light, infrared rays, multispectral, hyper-spectral, radar, and other info, which can then be used to increase battlefield sensory and knowledge capability, support military operations.”

They also “can carry various payloads aimed at electronic counter-battle, fulfilling the aim of electronic magnetic suppression and electronic magnetic attack on the battlefield, damage and destroy an adversary’s information systems.”

Four years after the PLA Daily article, images were published in the military pages of Global Times, a state-controlled outlet, of two small-scale stratospheric vehicles identified as KF13 and KF16.

The vehicles were developed by the Opto-Electronics Engineering Institute of Beijing Aeronautics and Aerospace University, China’s main aeronautical and aerospace research university, according to an explanatory note published alongside the model shown in the Global Times. The institution is now known as Beihang [Beijing-Aero] University.

The explanatory note said a key feature of the vehicles was their unmanned and remote-control dual capability. Work was being done in Beijing and Shanghai, as well as in Shanxi province, on seeing the vehicles evolve from concept to production, according to the October 2015 article.

Other images of near space objects that surfaced the same month featured variously shaped aircraft whose features and functions included high-functioning surface materials, emergency control mechanisms, precise flight control technology, high-efficiency propeller technology, high-efficiency solar technology and ground operation integration technology.

An image of a blimp-like near space flying object called the Yuan Meng, literally “fulfilling dream,” was also posted to the internet in October 2015. It was described as having a flying altitude of 20-24 kilometers, a flight duration of six months and a payload of 100-300 kilograms.

Rick Fisher, a senior fellow at the International Assessment and Strategy Center in Washington, told VOA that China’s interest in the exploitation of near space actually began long before the PLA Daily article.

“Since the late 1990s, the PLA has been devoting resources for research and development for preparing for combat in ‘near space,’ the zone just below Low Earth Orbit (LEO) that is less expensive to reach than LEO [itself], and offers stealth advantages, especially for hypersonic platforms,” he said in an exchange of emails.

In addition to round balloons such as the one shot down by U.S. aircraft on Feb. 4, he said, “the PLA is also developing much larger blimp or airship stratospheric balloons that have solar powered engines driving large propellers that enable greater maneuverability.”

Fisher said Chinese state-owned conglomerates such as China Aerospace Science and Industry Corporation (CASIC) “have full-fledged near space programs like their Tengyun to produce very high-altitude UAV and hypersonic vehicles” for the purpose of waging combat in near space.

Tengyun literally means “riding above clouds.”

In September 2016, Chinese official media reported that Project Tengyun, initiated by CASIC, was expected to be ready for a test flight in 2030. The so-called “air-spacecraft” is designed to serve as a “new-generation, repeat-use roundtrip flying object between air and space,” a deputy general manager of CASIC told the 2nd Commercial Aeronautical Summit Forum held in Wuhan that month.

Another four projects proposed by CASIC also bore the concept of “cloud” in their names: Feiyun, meaning “flying cloud,” focuses on communication relay; Xingyun, meaning “cloud on the move,” would enable users to send text or audio messages even “at the end of the earth or edge of the sky”; Hongyun, meaning “rainbow cloud,” would be able to launch 156 satellites in its first stage; and Kuaiyun, meaning “fast cloud,” would be tasked with formulating a near space spheric network.

While China’s openness about its near space ambitions may be debatable, the speed with which it has made advances in related R&D appears to be indisputable.

“Throughout my career that was focused on the PLA, I do not recall anything about the PLA having a balloon program, let alone to have balloons operating over U.S. territory,” U.S. Navy Captain (retired) James Fanell, who retired as director of intelligence for the U.S. Pacific Fleet in 2015, told VOA in a written interview.

U.S. officials now say, however, that they are aware of at least 40 incidents in which Chinese surveillance balloons have passed over countries on as many as five continents. Those presumably included an incident last December in which a high-altitude airship was photographed near the northern Philippine island of Luzon bordering the South China Sea.

“The object would look to be a teardrop-shaped airship with four tail fins. It’s not entirely clear from the images whether it might have a translucent exterior or a metallic-like one,” wrote Joseph Trevithick, deputy editor of The War Zone, a specialized website dedicated to developments in military technology and international security.

“Overall, the apparent airship’s general shape has broad similarities to a number of high-altitude, long-endurance types that Chinese companies are known to have been working on,” he wrote, including “at least two uncrewed solar-powered designs, the Tian Hang and Yuan Meng, with external propulsion and other systems intended primarily for operations at stratospheric altitudes, both of which have reportedly been test flown at least once.”

Fisher said the United States would be well advised to emulate China in enhancing its capabilities in near space.

The American aerospace company Lockheed Martin “tested a technology demonstrator in 2011 [but] there has been no further development of operational stratospheric airships for the U.S.” since then, Fisher said.

“The PLA is correct to invest in stratosphere balloons and airships; the U.S. must do more to develop these assets as well.”

Tesla Recalls ‘Full Self-Driving’ to Fix Unsafe Actions

U.S. safety regulators have pressured Tesla into recalling nearly 363,000 vehicles with its “Full Self-Driving” system because it misbehaves around intersections and doesn’t always follow speed limits.

The recall, part of a larger investigation by the National Highway Traffic Safety Administration into Tesla’s automated driving systems, is the most serious action taken yet against the electric vehicle maker.

It raises questions about CEO Elon Musk’s claims that he can prove to regulators that cars equipped with “Full Self-Driving” are safer than humans, and that humans almost never have to touch the controls.

Musk at one point had promised that a fleet of autonomous robotaxis would be in use in 2020. The latest action appears to push that development further into the future.

The safety agency says in documents posted on its website Thursday that Tesla will fix the concerns with an online software update in the coming weeks. The documents say Tesla is recalling the cars but does not agree with an agency analysis of the problem.

The system, which is being tested on public roads by as many as 400,000 Tesla owners, takes such unsafe actions as traveling straight through an intersection while in a turn-only lane, failing to come to a complete stop at stop signs, or going through an intersection during a yellow traffic light without proper caution, NHTSA said.

In addition, the system may not adequately respond to changes in posted speed limits, or it may not account for the driver’s adjustments in speed, the documents said.

“FSD beta software that allows a vehicle to exceed speed limits or travel through intersections in an unlawful or unpredictable manner increases the risk of a crash,” the agency said in documents.

Musk complained Thursday on Twitter, which he now owns, that calling an over-the-air software update a recall is “anachronistic and just flat wrong!” A message was left Thursday seeking further comment from Tesla, which has disbanded its media relations department.

Tesla has received 18 warranty claims that could be caused by the software from May 2019 through Sept. 12, 2022, the documents said. But the Austin, Texas, electric vehicle maker told the agency it is not aware of any deaths or injuries.

In a statement, NHTSA said it found the problems during tests performed as part of an investigation into Tesla’s “Full Self-Driving” and “Autopilot” software that take on some driving tasks. The investigation remains open, and the recall doesn’t address the full scope of what NHTSA is scrutinizing, the agency said.

Despite the names “Full Self-Driving” and “Autopilot,” Tesla says on its website that the cars cannot drive themselves and owners must be ready to intervene at all times.

The recall announced Thursday covers certain 2016-23 Model S and Model X vehicles, as well as 2017 through 2023 Model 3s, and 2020 through 2023 Model Y vehicles equipped with the software, or with installation pending.

US ‘Disruptive Technology’ Strike Force to Target National Security Threats

A top U.S. law enforcement official on Thursday unveiled a new “disruptive technology strike force” tasked with safeguarding American technology from foreign adversaries and other national security threats.

Deputy Attorney General Lisa Monaco, the No. 2 U.S. Justice Department official, made the announcement at a speech in London at Chatham House. The initiative, Monaco said, will be a joint effort between her department and the U.S. Commerce Department, with a goal of blocking adversaries from “trying to siphon our best technology.”

Monaco also addressed concerns about Chinese-owned video sharing app TikTok.

The U.S. government’s Committee on Foreign Investment in the United States, a powerful national security body, in 2020 ordered Chinese company ByteDance to divest TikTok because of fears that user data could be passed on to China’s government. The divestment has not taken place.

The committee and TikTok have been in talks for more than two years aiming to reach a national security agreement.

“I will note I don’t use TikTok, and I would not advise anybody to do so because of these concerns. The bottom line is China has been quite clear that they are trying to mold and put forward the use and norms around technologies that advance their privileges, their interests,” Monaco said.

The Justice Department in recent years has increasingly focused its efforts on bringing criminal cases to protect corporate intellectual property, U.S. supply chains and private data about Americans from foreign adversaries, either through cyberattacks, theft or sanctions evasion.

U.S. law enforcement officials have said that China by far remains the biggest threat to America’s technological innovation and economic security, a view that Monaco reiterated on Thursday.

“China’s doctrine of ‘civil-military fusion’ means that any advance by a Chinese company with military application must be shared with the state,” Monaco said. “So if a company operating in China collects your data, it is a good bet that the Chinese government is accessing it.”

Under former President Donald Trump’s administration, the Justice Department created a China initiative tasked with combating Chinese espionage and intellectual property theft.

President Joe Biden’s Justice Department later scrapped the name and re-focused the initiative amid criticism it was fueling racism by targeting professors at U.S. universities over whether they disclosed financial ties to China.

The department did not back away from continuing to pursue national security cases involving China and its alleged efforts to steal intellectual property or other American data.

The Commerce Department last year imposed new export controls on advanced computing and semiconductor components in a maneuver designed to prevent China from acquiring certain chips.

Monaco said on Thursday that the United States “must also pay attention to how our adversaries can use private investments in their companies to develop the most sensitive technologies, to fuel their drive for a military and national security edge.”

She noted that the Biden administration is “exploring how to monitor the flow of private capital in critical sectors” to ensure it “doesn’t provide our adversaries with a national security advantage.”

A bipartisan group of U.S. lawmakers last year called on Biden to issue an executive order to boost oversight of investments by U.S. companies and individuals in China and other countries.

Some Dogs and Cats Use Words to Express Their Needs and Wants

Imagine if your dog or cat could use words to let you know when they’re angry, lonely or in pain. Well now they can, thanks to an innovative communication tool that’s helping them express themselves more effectively. VOA’s Julie Taboh has more.

Camera: Adam Greenbaum           

Produced by: Julie Taboh, Adam Greenbaum  

Report Says US Justice Department Escalates Apple Probe

The United States Justice Department has in recent months escalated its antitrust probe of Apple Inc., The Wall Street Journal reported on Wednesday, citing people familiar with the matter.

Reuters had previously reported the Justice Department opened an antitrust probe into Apple in 2019. 

The Wall Street Journal report said more litigators have now been assigned, while new requests for documents and consultations have been made with all the companies involved. 

The probe will also look at whether Apple’s mobile operating system, iOS, is anti-competitive, favoring its own products over those of outside developers, the report added. 

The Justice Department declined to comment, while Apple did not immediately respond to a request for comment. 

Elon Musk Hopes to Have Twitter CEO Toward the End of Year 

Billionaire Elon Musk said Wednesday that he anticipates finding a CEO for Twitter “probably toward the end of this year.”

Speaking via a video call to the World Government Summit in Dubai, Musk said making sure the platform can function remained the most important thing for him.

“I think I need to stabilize the organization and just make sure it’s in a financial healthy place,” Musk said when asked about when he’d name a CEO. “I’m guessing probably toward the end of this year would be good timing to find someone else to run the company.”

Musk, 51, made his wealth initially on the finance website PayPal, then created the spacecraft company SpaceX and invested in the electric car company Tesla. In recent months, however, more attention has been focused on the chaos surrounding his $44 billion purchase of the microblogging site Twitter.

Meanwhile, the Ukrainian military’s use of Musk’s satellite internet service Starlink as it defends itself against Russia’s ongoing invasion has put Musk off and on at the center of the war.

Musk offered a wide-ranging 35-minute discussion that touched on the billionaire’s fears about artificial intelligence, the collapse of civilization and the possibility of space aliens. But questions about Twitter kept coming back up as Musk described both Tesla and SpaceX as able to function without his direct, day-to-day involvement.

“Twitter is still somewhat a startup in reverse,” he said. “There’s work required here to get Twitter to sort of a stable position and to really build the engine of software engineering.” 

Musk also sought to portray his takeover of San Francisco-based Twitter as a cultural correction. 

“I think that the general idea is just to reflect the values of the people as opposed to imposing the values of essentially San Francisco and Berkeley, which are so somewhat of a niche ideology as compared to the rest of the world,” he said. “And, you know, Twitter was, I think, doing a little too much to impose a niche.”

Musk’s takeover at Twitter has seen mass firings and other cost-cutting measures. Musk, who is on the hook for about $1 billion in yearly interest payments for his purchase, has been trying to find ways to maximize profits at the company.

However, some of Musk’s decisions have conflicted with the reasons that journalists, governments and others rely on Twitter as an information-sharing platform.

Musk on Wednesday described the need for users to rely on Twitter for trusted information from verified accounts. However, a confused rollout of a paid verified-account system saw some users impersonate famous companies, leading to a further withdrawal of needed advertising cash from the site.

“Twitter is certainly quite the rollercoaster,” he acknowledged.

Forbes estimates Musk’s wealth at just under $200 billion. The Forbes analysis ranks Musk as the second-wealthiest person on Earth, just behind French luxury brand magnate Bernard Arnault. 

But Musk has also become a thought leader for some, albeit an oracle who is trying to get six hours of sleep a night despite the challenges at Twitter.

Musk described his children as being “programmed by Reddit and YouTube.” However, he criticized the Chinese-made social media app TikTok.

“TikTok has a lot of very high usage (but) I often hear people say, ‘Well, I spent two hours on TikTok, but I regret those two hours,’” Musk said. “We don’t want that to be the case with Twitter.”

TikTok, owned by Beijing-based ByteDance, did not immediately respond to a request for comment. 

Musk warned that artificial intelligence should be regulated “very carefully,” describing it as akin to the promise of nuclear power but the danger of atomic bombs. He also cautioned against having a single civilization or “too much cooperation” on Earth, saying it could “collapse” a society that’s like a “tiny candle in a vast darkness.”

And when asked about the existence of aliens, Musk had a firm response.

“The crazy thing is, I’ve seen no evidence of alien technology or alien life whatsoever. And I think I’d know because of SpaceX,” he said. “I don’t think anybody knows more about space, you know, than me.” 

11 States Consider ‘Right to Repair’ for Farming Equipment

On Colorado’s northeastern plains, where the pencil-straight horizon divides golden fields and blue sky, a farmer named Danny Wood scrambles to plant and harvest proso millet, dryland corn and winter wheat in short, seasonal windows. That is until his high-tech Steiger 370 tractor conks out. 

The tractor’s manufacturer doesn’t allow Wood to make certain fixes himself, and last spring his fertilizing operations were stalled for three days before the servicer arrived to add a few lines of missing computer code for $950. 

“That’s where they have us over the barrel, it’s more like we are renting it than buying it,” said Wood, who spent $300,000 on the used tractor. 

Wood’s plight, echoed by farmers across the country, has pushed lawmakers in Colorado and 10 other states to introduce bills that would force manufacturers to provide the tools, software, parts and manuals needed for farmers to do their own repairs — thereby avoiding steep labor costs and delays that imperil profits. 

“The manufacturers and the dealers have a monopoly on that repair market because it’s lucrative,” said Rep. Brianna Titone, a Democrat and one of the bill’s sponsors. “[Farmers] just want to get their machine going again.” 

In Colorado, the legislation is largely being pushed by Democrats, while their Republican colleagues find themselves stuck in a tough spot: torn between right-leaning farming constituents asking to be able to repair their own machines and the manufacturing businesses that oppose the idea. 

The manufacturers argue that changing the current practice with this type of legislation would force companies to expose trade secrets. They also say it would make it easier for farmers to tinker with the software and illegally crank up the horsepower and bypass the emissions controller — risking operators’ safety and the environment. 

Similar arguments around intellectual property have been leveled against the broader campaign called ‘right to repair,’ which has picked up steam across the country — crusading for the right to fix everything from iPhones to hospital ventilators during the pandemic. 

In 2011, Congress tried passing a right to repair law for car owners and independent servicers. That bill did not pass, but a few years later, automotive industry groups agreed to a memorandum of understanding to give owners and independent mechanics — not just authorized dealerships — access to tools and information to fix problems. 

In 2021, the Federal Trade Commission pledged to beef up its right to repair enforcement at the direction of President Joe Biden. And just last year, Titone sponsored and passed Colorado’s first right to repair law, empowering people who use wheelchairs with the tools and information to fix them. 

For the right to repair farm equipment — from thin tractors used between grape vines to behemoth combines for harvesting grain that can cost over half a million dollars — Colorado is joined by 10 states including Florida, Maryland, Missouri, New Jersey, Texas and Vermont. 

Many of the bills are finding bipartisan support, said Nathan Proctor, who leads Public Interest Research Group’s national right to repair campaign. But in Colorado’s House committee on agriculture, Democrats pushed the bill forward in a 9-4 vote along party lines, with Republicans in opposition even though the bill’s second sponsor is Republican Representative Ron Weinberg. 

“That’s really surprising, and that upset me,” said the Republican farmer Wood. 

Wood’s tractor, which flies an American flag reading “Farmers First,” isn’t his only machine to break down. His grain harvesting combine was dropping into idle, but the servicer took five days to arrive on Wood’s farm — a setback that could mean a hail storm decimates a wheat field or the soil temperature moves beyond the Goldilocks zone for planting. 

“Our crop is ready to harvest and we can’t wait five days, but there was nothing else to do,” said Wood. “When it’s broke down you just sit there and wait and that’s not acceptable. You can be losing $85,000 a day.” 

Representative Richard Holtorf, the Republican who represents Wood’s district and is a farmer himself, said he’s being pulled between his constituents and the dealerships in his district covering the largely rural northeast corner of the state. He voted against the measure because he believes it will financially hurt local dealerships in rural areas and could jeopardize trade secrets. 

“I do sympathize with my farmers,” Holtorf said, but he added, “I don’t think it’s the role of government to be forcing the sale of their intellectual property.”  

At the packed hearing last week that spilled into a second room in Colorado’s Capitol, the core concerns raised in testimony were farmers illegally slipping around the emissions control and cranking up the horsepower. 

“I know growers, if they can change horsepower and they can change emissions they are going to do it,” said Russ Ball, sales manager at 21st Century Equipment, a John Deere dealership in Western states. 

The bill’s proponents acknowledged that the legislation could make it easier for operators to modify horsepower and emissions controls but argued that farmers are already able to tinker with their machines and doing so would remain illegal. 

This January, the Farm Bureau and the farm equipment manufacturer John Deere did sign a memorandum of understanding — a right to repair agreement made in the free market and without government intervention. The agreement stipulates that John Deere will share some parts, diagnostic and repair codes and manuals to allow farmers to make their own fixes. 

The Colorado bill’s detractors laud that agreement as a strong middle ground, while Titone said it wasn’t enough, pointing to the six of Colorado’s biggest farmworker associations that support the bill.

Proctor, who is tracking 20 right to repair proposals in a number of industries across the country, said the memorandum of understanding has fallen far short. 

“Farmers are saying no,” Proctor said. “We want the real thing.” 

China-Owned Parent Company of TikTok Among Top Spenders on Internet Lobbying

ByteDance, the Chinese parent company of social media platform TikTok, has dramatically upped its U.S. lobbying effort since 2020 as U.S.-China relations continue to sour and is now the fourth-largest Internet company in spending on federal lobbying as of last year, according to newly released data.

Publicly available information collected by OpenSecrets, a Washington nonprofit that tracks campaign finance and lobbying data, shows that ByteDance and its subsidiaries, including TikTok, the wildly popular short video app, have spent more than $13 million on U.S. lobbying since 2020. In 2022 alone, Fox News reported, the companies spent $5.4 million on lobbying.

Only Amazon.com ($19.7 million) and the parent companies of Google ($11 million) and Facebook ($19 million) spent more, according to OpenSecrets.

In the fourth quarter of 2022, ByteDance spent $1.2 million on lobbying, according to Fox News.

The lobbyists hired by ByteDance include former U.S. senators Trent Lott and John Breaux; David Urban, a former senior adviser to Donald Trump’s 2016 presidential campaign who was also a former chief of staff for the late Senator Arlen Specter; Layth Elhassani, special assistant to President Barack Obama in the White House Office of Legislative Affairs; and Samantha Clark, former deputy staff director of the U.S. Senate Armed Services Committee.

In November, TikTok hired Jamal Brown, a deputy press secretary at the Pentagon who was national press secretary for Joe Biden’s presidential campaign, to manage policy communications for the Americas, with a focus on the U.S., according to Politico.

“This is kind of the template for how modern tech lobbying goes,” Dan Auble, a senior researcher at OpenSecrets, told Vox. “These companies come on the scene and suddenly start spending substantial amounts of money. And ByteDance has certainly done that.”

U.S. officials have criticized TikTok as a security risk due to ties between ByteDance and the Chinese government. The worry is that user data collected by TikTok could be passed to Beijing, so lawmakers have been trying to regulate or even ban the app in the U.S.

In 2019, TikTok paid a $5.7 million fine as part of a settlement with the Federal Trade Commission over violating children’s privacy rights. The Trump administration attempted unsuccessfully to ban downloads of TikTok from app stores and outlaw transactions between Americans and ByteDance.

As of late December, TikTok has been banned on federally managed devices, and 19 states have at least partially blocked the app from state-managed devices.

The number of federal bills that ByteDance has been lobbying on increased to 14 in 2022 from eight in 2020.

With TikTok CEO Shou Zi Chew scheduled to testify before the U.S. House of Representatives Energy and Commerce Committee on March 23, and a House of Representatives Foreign Affairs Committee vote in March on a bill that would ban the use of TikTok in the U.S., the company is expected to further expand its U.S. influence campaign.

Erich Andersen, general counsel and head of corporate affairs at ByteDance and TikTok, told the New York Times in January that “it was necessary for us to accelerate our own explanation of what we were prepared to do and the level of commitments on the national security process.”

TikTok has been met with a mixed response to its efforts to prove that its operations in the U.S. are outside of Beijing’s sphere of influence.

Michael Beckerman, who oversees public policy for the Americas at TikTok, met with Mike Gallagher, chairman of the U.S. House of Representatives Select Committee on China Affairs, on February 1 to explain the company’s U.S. data security plans.

According to Reuters, Gallagher’s spokesperson, Jordan Dunn, said after the meeting that the lawmaker “found their argument unpersuasive.”

Congressman Ken Buck and Senator Josh Hawley on January 25 introduced a bill, the No TikTok on United States Devices Act, which would direct President Joe Biden to use the International Emergency Economic Powers Act to prohibit downloads of TikTok and ban commercial activity with ByteDance.

Joel Thayer, president of the Digital Progress Institute and a telecom regulation lawyer, told VOA Mandarin that he doubted the Buck-Hawley bill would become law. He said that calls to ban TikTok began during the Trump administration, yet TikTok has remained a visible and influential presence in the U.S.

James Lewis, director of the CSIS Technology and Public Policy Program, told VOA Mandarin, “An outright ban will be difficult because TikTok is speech, which is protected speech. But it [the U.S. government] can ban financial transactions, that’s possible.”

Senators Marco Rubio and Angus King reintroduced bipartisan legislation on February 10 to ban TikTok and other similar apps from operating in the U.S. by “blocking and prohibiting all transactions from any social media company in, or under the influence of, China, Russia, and several other foreign countries of concern unless they fully divest of dangerous foreign ownership.”

The Committee on Foreign Investment in the United States (CFIUS), an interagency group that reviews transactions involving foreign parties for possible national security threats, ordered ByteDance to divest TikTok in 2020. The two parties have yet to reach an agreement after two years of talks.

Chuck Flint, vice president of strategic relationships at Breitbart News who is also the former chief of staff for Senator Marsha Blackburn, told VOA Mandarin, “I expect that CFIUS will be hesitant to ban TikTok. Anything short of an outright ban will leave China’s TikTok data pipeline in place.”

China experts believe that TikTok wants to reach an agreement with CFIUS rather than being banned from the U.S. or being forced to sell TikTok’s U.S. business to an American company.

Lewis of CSIS said, “Every month that we don’t do CFIUS is a step closer towards some kind of ban.”

Julian Ku, professor of law and faculty director of international programs at Hofstra University, told VOA Mandarin, “The problem is that no matter what they offer, there’s no way to completely shield the data from the Chinese government … as long as there continues to be a shared entity.”

Adrianna Zhang contributed to this report.

Google to Expand Misinformation ‘Prebunking’ in Europe

After seeing promising results in Eastern Europe, Google will initiate a new campaign in Germany that aims to make people more resilient to the corrosive effects of online misinformation.

The tech giant plans to release a series of short videos highlighting the techniques common to many misleading claims. The videos will appear as advertisements on platforms like Facebook, YouTube or TikTok in Germany. A similar campaign in India is also in the works.

It’s an approach called prebunking, which involves teaching people how to spot false claims before they encounter them. The strategy is gaining support among researchers and tech companies. 

“There’s a real appetite for solutions,” said Beth Goldberg, head of research and development at Jigsaw, an incubator division of Google that studies emerging social challenges. “Using ads as a vehicle to counter a disinformation technique is pretty novel. And we’re excited about the results.”

While belief in falsehoods and conspiracy theories isn’t new, the speed and reach of the internet has given them a heightened power. When catalyzed by algorithms, misleading claims can discourage people from getting vaccines, spread authoritarian propaganda, foment distrust in democratic institutions and spur violence.

It’s a challenge with few easy solutions. Journalistic fact checks are effective, but they’re labor intensive, aren’t read by everyone, and won’t convince those already distrustful of traditional journalism. Content moderation by tech companies is another response, but it only drives misinformation elsewhere, while prompting cries of censorship and bias.

Prebunking videos, by contrast, are relatively cheap and easy to produce and can be seen by millions when placed on popular platforms. They also avoid the political challenge altogether by focusing not on the topics of false claims, which are often cultural lightning rods, but on the techniques that make viral misinformation so infectious.

Those techniques include fear-mongering, scapegoating, false comparisons, exaggeration and missing context. Whether the subject is COVID-19, mass shootings, immigration, climate change or elections, misleading claims often rely on one or more of these tricks to exploit emotions and short-circuit critical thinking.

Last fall, Google launched the largest test of the theory so far with a prebunking video campaign in Poland, the Czech Republic and Slovakia. The videos dissected different techniques seen in false claims about Ukrainian refugees. Many of those claims relied on alarming and unfounded stories about refugees committing crimes or taking jobs away from residents.

The videos were seen 38 million times on Facebook, TikTok, YouTube and Twitter — a number that equates to a majority of the population in the three nations. Researchers found that compared to people who hadn’t seen the videos, those who did watch were more likely to be able to identify misinformation techniques, and less likely to spread false claims to others.
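
As a quick sanity check on that "majority" claim (the population figures here are approximate 2022 estimates assumed for illustration, and views are of course not the same as unique viewers): Poland, the Czech Republic and Slovakia together have roughly 54 million residents, so 38 million views exceeds half of that combined total.

```latex
% Assumed approximate populations: Poland ~38M, Czech Republic ~10.5M, Slovakia ~5.5M
\[
38 + 10.5 + 5.5 \approx 54 \text{ million people}, \qquad
\frac{38\ \text{million views}}{54\ \text{million people}} \approx 70\% > 50\%
\]
```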

The pilot project was the largest test of prebunking so far and adds to a growing consensus in support of the theory.

“This is a good news story in what has essentially been a bad news business when it comes to misinformation,” said Alex Mahadevan, director of MediaWise, a media literacy initiative of the Poynter Institute that has incorporated prebunking into its own programs in countries including Brazil, Spain, France and the U.S.

Mahadevan called the strategy a “pretty efficient way to address misinformation at scale, because you can reach a lot of people while at the same time address a wide range of misinformation.”

Google’s new campaign in Germany will include a focus on photos and videos, and the ease with which they can be presented as evidence of something false. One example: Last week, following the earthquake in Turkey, some social media users shared video of the massive explosion in Beirut in 2020, claiming it was actually footage of a nuclear explosion triggered by the earthquake. It was not the first time the 2020 explosion had been the subject of misinformation.

Google will announce its new German campaign Monday ahead of next week’s Munich Security Conference. The timing of the announcement, coming before that annual gathering of international security officials, reflects heightened concerns about the impact of misinformation among both tech companies and government officials.

Tech companies like prebunking because it avoids touchy topics that are easily politicized, said Sander van der Linden, a University of Cambridge professor considered a leading expert on the theory. Van der Linden worked with Google on its campaign and is now advising Meta, the owner of Facebook and Instagram, as well.

Meta has incorporated prebunking into many different media literacy and anti-misinformation campaigns in recent years, the company told The Associated Press in an emailed statement.

They include a 2021 program in the U.S. that offered media literacy training about COVID-19 to Black, Latino and Asian American communities. Participants who took the training were later tested and found to be far more resistant to misleading COVID-19 claims.

Prebunking comes with its own challenges. The effects of the videos eventually wear off, requiring the use of periodic “booster” videos. Also, the videos must be crafted well enough to hold the viewer’s attention, and tailored for different languages, cultures and demographics. And like a vaccine, it’s not 100% effective for everyone.

Google found that its campaign in Eastern Europe varied from country to country. While the effect of the videos was highest in Poland, in Slovakia they had “little to no discernible effect,” researchers found. One possible explanation: The videos were dubbed into the Slovak language, and not created specifically for the local audience.

But together with traditional journalism, content moderation and other methods of combating misinformation, prebunking could help communities reach a kind of herd immunity when it comes to misinformation, limiting its spread and impact.

“You can think of misinformation as a virus. It spreads. It lingers. It can make people act in certain ways,” Van der Linden told the AP. “Some people develop symptoms, some do not. So: if it spreads and acts like a virus, then maybe we can figure out how to inoculate people.”

Russian Spacecraft Loses Pressure; Space Station Crew Safe

An uncrewed Russian supply ship docked at the International Space Station has lost cabin pressure, the Russian space corporation reported Saturday, saying the incident doesn’t pose any danger to the station’s crew.

Roscosmos said the hatch between the station and the Progress MS-21 had been locked so the loss of pressure didn’t affect the orbiting outpost.

“The temperature and pressure on board the station are within norms and there is no danger to health and safety of the crew,” it said in a statement.

The space corporation didn’t say what may have caused the cargo ship to lose pressure.

Roscosmos noted that the cargo ship had already been loaded with waste before its scheduled disposal. The craft is set to be undocked from the station and deorbited to burn up in the atmosphere on Feb. 18.

The announcement came shortly after a new Russian cargo ship docked smoothly at the station Saturday. The Progress MS-22 delivered almost 3 tons of food, water and fuel along with scientific equipment for the crew.

Roscosmos said that the loss of pressure in the Progress MS-21 didn’t affect the docking of the new cargo ship and “will have no impact on the future station program.”

The depressurization of the cargo craft follows an incident in December with the Soyuz crew capsule, which was hit by a tiny meteoroid that left a small hole in the exterior radiator and sent coolant spewing into space.

Russian cosmonauts Sergey Prokopyev and Dmitri Petelin, and NASA astronaut Frank Rubio were supposed to use the capsule to return to Earth in March, but Russian space officials decided that higher temperatures resulting from the coolant leak could make it dangerous to use.

They decided to launch a new Soyuz capsule February 20 so the crew would have a lifeboat in the event of an emergency. But since it will travel in automatic mode to expedite the launch, a replacement crew will now have to wait until late summer or fall when another capsule is ready. It means that Prokopyev, Petelin and Rubio will have to stay several extra months at the station, possibly pushing their mission to close to a year.

NASA took part in all the discussions and agreed with the plan.

Besides Prokopyev, Petelin and Rubio, the space station is home to NASA astronauts Nicole Mann and Josh Cassada, Russian Anna Kikina, and Japan’s Koichi Wakata. The four rode up on a SpaceX capsule last October.

Several US Universities to Experiment With Micro Nuclear Power 

If your image of nuclear power is giant, cylindrical concrete cooling towers pouring out steam on a site that takes up hundreds of acres of land, soon there will be an alternative: tiny nuclear reactors that produce only one-hundredth the electricity and can even be delivered on a truck.

Small but meaningful amounts of electricity — nearly enough to run a small campus, a hospital or a military complex, for example — will pulse from a new generation of micronuclear reactors. Now, some universities are taking interest.

“What we see is these advanced reactor technologies having a real future in decarbonizing the energy landscape in the U.S. and around the world,” said Caleb Brooks, a nuclear engineering professor at the University of Illinois at Urbana-Champaign.

The tiny reactors carry some of the same challenges as large-scale nuclear, such as how to dispose of radioactive waste and how to make sure they are secure. Supporters say those issues can be managed and the benefits outweigh any risks.

Universities are interested in the technology not just to power their buildings but to see how far it can go in replacing the coal and gas-fired energy that causes climate change. The University of Illinois hopes to advance the technology as part of a clean energy future, Brooks said. The school plans to apply for a construction permit for a high-temperature, gas-cooled reactor developed by the Ultra Safe Nuclear Corporation, and aims to start operating it by early 2028. Brooks is the project lead.

Microreactors will be “transformative” because they can be built in factories and hooked up on site in a plug-and-play way, said Jacopo Buongiorno, professor of nuclear science and engineering at the Massachusetts Institute of Technology. Buongiorno studies the role of nuclear energy in a clean energy world.

“That’s what we want to see, nuclear energy on demand as a product, not as a big mega project,” he said.

Both Buongiorno and Marc Nichol, senior director for new reactors at the Nuclear Energy Institute, view the interest by schools as the start of a trend.

Last year, Penn State University signed a memorandum of understanding with Westinghouse to collaborate on microreactor technology. Mike Shaqqo, the company’s senior vice president for advanced reactor programs, said universities are going to be “one of our key early adopters for this technology.”

Penn State wants to prove the technology so that Appalachian industries, such as steel and cement manufacturers, may be able to use it, said Professor Jean Paul Allain, head of the nuclear engineering department. Those two industries tend to burn dirty fuels and have very high emissions. Using a microreactor also could be one of several options to help the university use less natural gas and achieve its long-term carbon emissions goals, he said.

“I do feel that microreactors can be a game-changer and revolutionize the way we think about energy,” Allain said.

For Allain, microreactors can complement renewable energy by providing a large amount of power without taking up much land. A 10-megawatt microreactor could go on less than an acre, whereas windmills or a solar farm would need far more space to produce 10 megawatts, he added. The goal is to have one at Penn State by the end of the decade.
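
For a rough sense of the land-use comparison Allain is drawing (the acres-per-megawatt figure for solar is a commonly cited approximation assumed here, not a number from the article, and the comparison is of nameplate capacity only, ignoring capacity-factor differences):

```latex
% Assumed rule of thumb: utility-scale solar occupies roughly 5--8 acres per megawatt.
\[
10\ \mathrm{MW} \times (5\text{--}8)\ \tfrac{\text{acres}}{\mathrm{MW}}
\approx 50\text{--}80\ \text{acres}
\qquad \text{vs.} \qquad
< 1\ \text{acre for a 10-MW microreactor}
\]
```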

Purdue University in Indiana is working with Duke Energy on the feasibility of using advanced nuclear energy to meet its long-term energy needs.

Nuclear reactors that are used for research are nothing new on campus. About two dozen U.S. universities have them. But using them as an energy source is new.

Back at the University of Illinois, Brooks explains the microreactor would generate heat to make steam. While the excess heat from burning coal and gas to make electricity is often wasted, Brooks sees the steam production from the nuclear microreactor as a plus, because it’s a carbon-free way to deliver steam through the campus district heating system to radiators in buildings, a common heating method for large facilities in the Midwest and Northeast. The campus has hundreds of buildings.

The 10-megawatt microreactor wouldn’t meet all of the demand, but it would serve to demonstrate the technology, as other communities and campuses look to transition away from fossil fuels, Brooks said.

One company that is building microreactors that the public can get a look at today is Last Energy, based in Washington, D.C. It built a model reactor in Brookshire, Texas, that’s housed in an edgy cube covered in reflective metal.

Now it’s taking that apart to test how to transport the unit. A caravan of trucks is taking it to Austin, where company founder Bret Kugelmass is scheduled to speak at the South by Southwest conference and festival.

Kugelmass, a technology entrepreneur and mechanical engineer, is talking with some universities, but his primary focus is on industrial customers. He’s working with licensing authorities in the United Kingdom, Poland and Romania to try to get his first reactor running in Europe in 2025.

The urgency of the climate crisis means zero-carbon nuclear energy must be scaled up soon, he said.

“It has to be a small, manufactured product as opposed to a large, bespoke construction project,” he said.

Traditional nuclear power costs billions of dollars. An example is two additional reactors at a plant in Georgia that will end up costing more than $30 billion.

The total cost of Last Energy’s microreactor, including module fabrication, assembly and site prep work, is under $100 million, the company says.

Westinghouse, which has been a mainstay of the nuclear industry for over 70 years, is developing its “eVinci” microreactor, Shaqqo said, and is aiming to get the technology licensed by 2027.

The Department of Defense is working on a microreactor too. Project Pele is a DOD prototype mobile nuclear reactor under design at the Idaho National Laboratory.

Abilene Christian University in Texas is leading a group of three other universities, with the company Natura Resources, to design and build a research microreactor cooled by molten salt to allow for high-temperature operations at low pressure, in part to help train the next generation of the nuclear workforce.

But not everyone shares the enthusiasm. Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists, called it “completely unjustified.”

Microreactors in general will require much more uranium to be mined and enriched per unit of electricity generated than conventional reactors do, he said. He said he also expects fuel costs to be substantially higher and that more depleted uranium waste could be generated compared to conventional reactors.

“I think those who are hoping that microreactors are going to be the silver bullet for solving the climate change crisis are simply betting on the wrong horse,” he said.

Lyman also said he fears microreactors could be targeted for a terrorist attack, and some designs would use fuels that could be attractive to terrorists seeking to build crude nuclear weapons. The UCS does not oppose using nuclear power, but wants to make sure it’s safe.

The United States does not have a national facility for storing spent nuclear fuel, which is piling up. Microreactors would only compound the problem and spread the radioactive waste around, Lyman said.

A 2022 Stanford-led study found that smaller modular reactors — the next size up from micro — will generate more waste than conventional reactors. Lead author Lindsay Krall said this week that the design of microreactors would make them subject to the same issue.

Kugelmass sees only promise. Nuclear, he said, has been “totally misunderstood and under leveraged.” It will be “the key pillar of our energy transformation moving forward.”

Australian Defense Department to Remove Chinese-Made Cameras

Australia’s Defense Department will remove surveillance cameras made by Chinese Communist Party-linked companies from its buildings, the government said Thursday after the U.S. and Britain made similar moves.

The Australian newspaper reported Thursday that at least 913 cameras, intercoms, electronic entry systems and video recorders developed and manufactured by Chinese companies Hikvision and Dahua are in Australian government and agency offices, including the Defense Department and the Department of Foreign Affairs and Trade.

Hikvision and Dahua are partly owned by China’s Communist Party-ruled government.

Australian Defense Minister Richard Marles said his department is assessing all its surveillance technology.

“Where those particular cameras are found, they’re going to be removed,” Marles told Australian Broadcasting Corp. “There is an issue here and we’re going to deal with it.”

Asked about Australia’s decision, Chinese Foreign Ministry spokesperson Mao Ning criticized what she called “wrongful practices that overstretch the concept of national security and abuse state power to suppress and discriminate against Chinese enterprises.”

Without mentioning Australia by name, Mao said the Chinese government has “always encouraged Chinese enterprises to carry out foreign investment and cooperation in accordance with market principles and international rules, and on the basis of compliance with local laws.”

“We hope Australia will provide a fair and non-discriminatory environment for the normal operation of Chinese enterprises and do more things that are conducive to mutual trust and cooperation between the two sides,” she told reporters at a daily briefing.

The U.S. government said in November it was banning telecommunications and video surveillance equipment from several prominent Chinese brands including Hikvision and Dahua in an effort to protect the nation’s communications network.

Security cameras made by Hikvision were also banned from British government buildings in November.

An audit in Australia found Hikvision and Dahua cameras and security equipment in almost every department except the Agriculture Department and the Department of Prime Minister and Cabinet.

The Australian War Memorial and National Disability Insurance Agency have said they will remove the Chinese cameras found at their sites, the ABC reported.

Opposition cybersecurity spokesperson James Paterson said he had prompted the audit by asking questions over six months of each federal agency, after the Home Affairs Department was unable to say how many of the cameras, access control systems and intercoms were installed in government buildings.

“We urgently need a plan from the … government to rip every one of these devices out of Australian government departments and agencies,” Paterson said.

Both companies are subject to China’s National Intelligence Law, which requires them to cooperate with Chinese intelligence agencies, he said.

“We would have no way of knowing if the sensitive information, images and audio collected by these devices are secretly being sent back to China against the interests of Australian citizens,” Paterson said.

US Students’ ‘Big Idea’ Could Help NASA Explore the Moon

Last November, Northeastern University student Andre Neto Caetano watched the live, late-night launch of NASA’s Artemis 1 from Kennedy Space Center in Florida on a cellphone placed on top of a piano in the lobby of the hotel where he was staying in California.

“I had, not a flashback, but a flash-forward of seeing maybe Artemis 4 or something, and COBRA, as part of the payload, and it is on the moon doing what it was meant to do,” Caetano told VOA during a recent Skype interview.

Artemis 1 launched the night before Caetano and his team of scholars presented their Crater Observing Bio-inspired Rolling Articulator (COBRA) rover project at NASA’s Breakthrough, Innovative, and Game Changing (BIG) Idea Challenge. The team hoped to impress judges assembled in the remote California desert.

“They were skeptical that the mobility solutions that we were proposing would actually work,” he said.

That skepticism, said Caetano, came from the simplicity of their design.

“It’s a robot that moves like a snake, and then the head and the tail connect, and then it rolls,” he said.

NASA’s BIG Idea Challenge prompted teams of college students to compete to develop solutions for the agency’s ambitious goals in the upcoming Artemis missions to the moon, a challenge Caetano sums up as “extreme lunar terrain mobility.”

Northeastern’s COBRA is designed to move through the fine dust, or regolith, of the lunar surface to probe the landscape for interesting features, including ice and water, hidden in the shadows of deep craters.

“They never could … deploy a robot or a ground vehicle that can sort of negotiate the environment and get to the bottom of these craters and look for ice water content,” said professor Alireza Ramezani, who advises the COBRA team and has worked with robotic designs that mimic the movements of real organisms, something Caetano said formed a baseline for their research.

“With him building a robot dog and robot bat, we knew we wanted to have some ‘bioinspiration’ in our project,” Caetano said.

Using biology as the driving force behind COBRA’s design was also something Ramezani hoped would win over judges in NASA’s competition.

“Our robot sort of tumbled 80 to 90 feet (24-27 meters) down this hill and that … impressed the judges,” he told VOA. “We did this with minimum energy consumption and within, like, 10 or 15 seconds.”

Caetano said COBRA weighs about 7 kilograms, “so the fact that COBRA is super light brings a benefit to it, as well.”

Ramezani added that COBRA is also cost-effective.

“If you want to have a space-worthy platform, it’s going to be in the order of $100,000 to $200,000. You can have many of these systems tumbling down these craters,” he said.

The Northeastern team’s successful COBRA test put to rest any lingering skepticism, sending them to the top of NASA’s 2022 BIG Idea competition and hopefully — in the not-too-distant future — to the top of NASA’s Space Launch System on its way to the moon.

“I’m not saying this, our judges said this. It’s potentially going to transform the way future space exploration systems look like,” said Ramezani. “They are even talking to some of our partners to see if we can increase technology readiness of the system, make it space worthy, and deploy it to the moon.”

That is why, despite graduating later this year, Caetano plans to continue developing COBRA alongside his teammates.

“Because we brought it to life together, the idea of just fully abandoning it at graduation probably doesn’t appeal to most of us,” Caetano said. “In some way or another, we still want to be involved in the project, in making sure that … we are still the ones who put it on the moon at some point.”

That could happen as soon as 2025, the year NASA hopes to return astronauts to the lunar surface in the Artemis program.

Australia to Review Chinese-Made Cameras in Defense Offices

The Australian government will examine surveillance technology used in offices of the defense department, Defense Minister Richard Marles said Thursday, amid reports that Chinese-made cameras installed there posed security risks.

The move comes after Britain in November asked its departments to stop installing Chinese-linked surveillance cameras at sensitive buildings. Some U.S. states have banned vendors and products from several Chinese technology companies.

“This is an issue and … we’re doing an assessment of all the technology for surveillance within the defense (department) and where those particular cameras are found, they are going to be removed,” Marles told ABC Radio in an interview.

Opposition lawmaker James Paterson said Thursday his own audit revealed that almost 1,000 pieces of equipment made by Hangzhou Hikvision Digital Technology and Dahua Technology, two partly state-owned Chinese firms, were installed across more than 250 Australian government offices.

Paterson, the shadow minister for cybersecurity and countering foreign interference, urged the government to urgently come up with a plan to remove all such cameras.

Marles said the issue was significant but “I don’t think we should overstate it.”

Australian media reported on Wednesday that the national war memorial in Canberra would remove several Chinese-made security cameras installed on the premises over concerns of spying.

Hikvision and Dahua Technology did not immediately respond to requests seeking comment.

Australia and China have been looking to mend diplomatic ties, which soured after Canberra banned Huawei from its 5G broadband network in 2018. Relations cooled further after Australia called for an independent investigation into the origins of COVID-19.

China responded with tariffs on several Australian commodities.

Prime Minister Anthony Albanese said he was not concerned about how Beijing might react to the removal of cameras.

“We act in accordance with Australia’s national interest. We do so transparently and that’s what we will continue to do,” Albanese told reporters.

Ex-Twitter Execs Deny Pressure to Block Hunter Biden Story

Former Twitter executives conceded Wednesday they made a mistake by blocking a story about Hunter Biden, the son of U.S. President Joe Biden, from the social media platform in the run-up to the 2020 election, but adamantly denied Republican assertions they were pressured by Democrats and law enforcement to suppress the story.

“The decisions here aren’t straightforward, and hindsight is 20/20,” Yoel Roth, Twitter’s former head of trust and safety, testified to Congress. “It isn’t obvious what the right response is to a suspected, but not confirmed, cyberattack by another government on a presidential election.”

He added, “Twitter erred in this case because we wanted to avoid repeating the mistakes of 2016.”

The three former executives appeared before the House Oversight and Accountability Committee to testify for the first time about the company’s decision to initially block from Twitter a New York Post article in October 2020 about the contents of a laptop belonging to Hunter Biden.

Emboldened by Twitter’s new leadership in billionaire Elon Musk — whom they see as more sympathetic to conservatives than the company’s previous leadership — Republicans used the hearing to push a long-standing and unproven theory that social media companies including Twitter are biased against them.

Committee Chairman Representative James Comer said the hearing is the panel’s “first step in examining the coordination between the federal government and Big Tech to restrict protected speech and interfere in the democratic process.”

Alleged political bias

The hearing continues a yearslong trend of Republican leaders calling tech company leaders to testify about alleged political bias. Democrats, meanwhile, have pressed the companies on the spread of hate speech and misinformation on their platforms.

The witnesses Republicans subpoenaed were Roth, Vijaya Gadde, Twitter’s former chief legal officer, and James Baker, the company’s former deputy general counsel.

Democrats brought a witness of their own, Anika Collier Navaroli, a former employee with Twitter’s content moderation team. She testified last year to the House committee that investigated the January 6 Capitol riot about Twitter’s preferential treatment of Donald Trump until it banned the then-president from the site two years ago.

‘A bizarre political stunt’

The White House criticized congressional Republicans for staging “a bizarre political stunt,” hours after Biden’s State of the Union address where he detailed bipartisan progress in his first two years in office.

“This appears to be the latest effort by the House Republican majority’s most extreme MAGA members to question and relitigate the outcome of the 2020 election,” White House spokesperson Ian Sams said in a statement Wednesday. “This is not what the American people want their leaders to work on.”

The New York Post reported weeks before the 2020 presidential election that it had received from Trump’s personal lawyer, Rudy Giuliani, a copy of a hard drive from a laptop that Hunter Biden had dropped off 18 months earlier at a Delaware computer repair shop and never retrieved. Twitter blocked people from sharing links to the story for several days.

“You exercised an amazing amount of clout and power over the entire American electorate by even holding (this story) hostage for 24 hours and then reversing your policy,” Representative Andy Biggs said to the panel of witnesses.

Months later, Twitter’s then-CEO, Jack Dorsey, called the company’s communications around the Post article “not great.” He added that blocking the article’s URL with “zero context” around why it was blocked was “unacceptable.”

The newspaper story was greeted at the time with skepticism because of questions about the laptop’s origins, including Giuliani’s involvement, and because top officials in the Trump administration had already warned that Russia was working to denigrate Joe Biden before the White House election.

The Kremlin interfered in the 2016 race by hacking Democratic emails that were subsequently leaked, and fears that Russia would meddle again in the 2020 race were widespread across Washington.

Musk releases ‘Twitter files’

Just last week, lawyers for the younger Biden asked the U.S. Justice Department to investigate people who say they accessed his personal data. But they did not acknowledge that the data came from a laptop Hunter Biden is purported to have dropped off at a computer repair shop.

The issue was also reignited recently after Musk took over Twitter as CEO and began to release a slew of company information to independent journalists, what he has called the “Twitter Files.”

The documents and data largely show internal debates among employees over the decision to temporarily censor links to the Hunter Biden story. The tweet threads lacked substantial evidence of a targeted influence campaign from Democrats or the FBI, which has denied any involvement in Twitter’s decision-making.

Witness often targeted

One of Wednesday’s witnesses, Baker, has been a frequent target of Republican scrutiny.

Baker was the FBI’s general counsel during the opening of two of the bureau’s most consequential investigations in history: the Hillary Clinton investigation and a separate inquiry into potential coordination between Russia and Trump’s 2016 presidential campaign. Republicans have long criticized the FBI’s handling of both investigations.

Baker denied any wrongdoing during his two years at Twitter and said that despite disagreeing with the decision to block links to the Post story, “I believe that the public record reveals that my client acted in a manner that was fully consistent with the First Amendment.”

There has been no evidence that Twitter’s platform is biased against conservatives; studies have found the opposite when it comes to conservative media in particular. But the issue continues to preoccupy Republican members of Congress.

And some experts said questions around government influence on Big Tech’s content moderation are legitimate.

Ex-Twitter Executives to Testify About Hunter Biden Story Before House Panel

Former Twitter employees are expected to testify next week before the House Oversight Committee about the social media platform’s handling of reporting on President Joe Biden’s son, Hunter Biden.

The scheduled testimony, confirmed by the committee Monday, will be the first time the three former executives will appear before Congress to discuss the company’s decision to initially block from Twitter a New York Post article regarding Hunter Biden’s laptop in the weeks before the 2020 election.

Republicans have said the story was suppressed for political reasons, though no evidence has been released to support that claim. The witnesses for the February 8 hearing are expected to be Vijaya Gadde, former chief legal officer; James Baker, former deputy general counsel; and Yoel Roth, former head of safety and integrity.

The hearing is among the first of many in a GOP-controlled House to be focused on Biden and his family, as Republicans wield the power of their new, albeit slim, majority.

The New York Post first reported in October 2020 that it had received from former President Donald Trump’s personal attorney, Rudy Giuliani, a copy of a hard drive of a laptop that Hunter Biden had dropped off 18 months earlier at a Delaware computer repair shop and never retrieved. Twitter initially blocked people from sharing links to the story for several days.

Months later, Twitter’s then-CEO Jack Dorsey called the company’s communications around the Post article “not great.” He added that blocking the article’s URL with “zero context” around why it was blocked was “unacceptable.”

The Post article at the time was greeted with skepticism due to questions about the laptop’s origins, including Giuliani’s involvement, and because top officials in the Trump administration already had warned that Russia was working to denigrate Joe Biden ahead of the 2020 election.

The Kremlin had interfered in the 2016 race by hacking Democratic emails that were subsequently leaked, and there were widespread fears across Washington that Russia would meddle again in the 2020 race.

“This is why we’re investigating the Biden family for influence peddling,” Rep. James Comer, chairman of the Oversight committee, said at a press event Monday morning. “We want to make sure that our national security is not compromised.”

The White House has sought to discredit the Republican probes into Hunter Biden, calling them “divorced-from-reality political stunts.”

Nonetheless, Republicans now hold subpoena power in the House, giving them the authority to compel testimony and conduct an aggressive investigation. GOP staff has spent the past year analyzing messages and financial transactions found on the laptop that belonged to the president’s younger son. Comer has previously said the evidence they have compiled is “overwhelming,” but did not offer specifics.

Comer has pledged there won’t be hearings regarding the Biden family until the committee has the evidence to back up any claims of alleged wrongdoing. He also acknowledged the stakes are high whenever an investigation centers on the leader of a political party.

On Monday, the Kentucky Republican, speaking at a National Press Club event, said that he could not guarantee a subpoena of Hunter Biden during his term. “We’re going to go where the investigation leads us. Maybe there’s nothing there.”

Comer added, “We’ll see.” 

Microsoft Bakes ChatGPT-Like Tech into Search Engine Bing

Microsoft is fusing ChatGPT-like technology into its search engine Bing, transforming an internet service that now trails far behind Google into a new way of communicating with artificial intelligence.

The revamping of Microsoft’s second-place search engine could give the software giant a head start against other tech companies in capitalizing on the worldwide excitement surrounding ChatGPT, a tool that’s awakened millions of people to the possibilities of the latest AI technology.

Along with adding it to Bing, Microsoft is also integrating the chatbot technology into its Edge browser. Microsoft announced the new technology at an event Tuesday at its headquarters in Redmond, Washington.

Microsoft said a public preview of the new Bing would launch Tuesday for users who sign up for it, with the technology scaling to millions of users in the coming weeks.

Yusuf Mehdi, corporate vice president and consumer chief marketing officer, said the new Bing would go live on desktop in a limited preview, with everyone able to try a limited number of queries.

The strengthening partnership with ChatGPT-maker OpenAI has been years in the making, starting with a $1 billion investment from Microsoft in 2019 that led to the development of a powerful supercomputer specifically built to train the San Francisco startup’s AI models.

ChatGPT is not always factual or logical, but its mastery of language and grammar comes from having ingested a huge trove of digitized books, Wikipedia entries, instruction manuals, newspapers and other online writings.

The shift to making search engines more conversational — able to confidently answer questions rather than offering links to other websites — could change the advertising-fueled search business, but also poses risks if the AI systems don’t get their facts right.

Their opaqueness also makes it hard to trace answers back to the original human-made images and texts they have effectively memorized.

Google has been cautious about such moves. But in response to pressure over ChatGPT’s popularity, Google CEO Sundar Pichai on Monday announced a new conversational service named Bard that will be available exclusively to a group of “trusted testers” before being widely released later this year.

Google’s chatbot is supposed to be able to explain complex subjects such as outer space discoveries in terms simple enough for a child to understand. Google also says the service will perform more mundane tasks, such as providing tips for planning a party or suggesting lunch ideas based on what food is left in a refrigerator. Other tech rivals such as Facebook parent Meta and Amazon have also worked on similar technology, but Microsoft’s latest moves aim to position it at the center of the ChatGPT zeitgeist.

Microsoft disclosed in January that it was pouring billions more dollars into OpenAI as it looks to fuse the technology behind ChatGPT, the image-generator DALL-E and other OpenAI innovations into an array of Microsoft products tied to its cloud computing platform and its Office suite of workplace products like email and spreadsheets.

The most surprising might be the integration with Bing, which is the second-place search engine in many markets but has never come close to challenging Google’s dominant position.

Bing launched in 2009 as a rebranding of Microsoft’s earlier search engines and was run for a time by Satya Nadella, years before he took over as Microsoft’s CEO. Its significance was boosted when Yahoo and Microsoft signed a deal for Bing to power Yahoo’s search engine, giving Microsoft access to Yahoo’s larger search share. Similar deals put Bing behind the search features of devices made by other companies, though users wouldn’t necessarily know that Microsoft was powering their searches.

By making it a destination for ChatGPT-like conversations, Microsoft could invite more users to give Bing a try.

On the surface, at least, a Bing integration seems far different from what OpenAI has in mind for its technology.

OpenAI has long voiced an ambitious vision for safely guiding what’s known as AGI, or artificial general intelligence, a not-yet-realized concept that harkens back to ideas from science fiction about human-like machines. OpenAI’s website describes AGI as “highly autonomous systems that outperform humans at most economically valuable work.”

OpenAI started out as a nonprofit research laboratory when it launched in December 2015 with backing from Tesla CEO Elon Musk and others. Its stated aims were to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

That changed in 2018, when it incorporated a for-profit arm, OpenAI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT model for generating human-like paragraphs of readable text.

OpenAI’s other products include the image-generator DALL-E, first released in 2021, the computer programming assistant Codex and the speech recognition tool Whisper.

Technology Brings Hope to Ukraine’s Wounded

The war in Ukraine has left thousands of wounded soldiers, many of whom require the latest technologies to heal and return to normal life. For VOA, Anna Chernikova visited a rehabilitation center near Kyiv, where cutting-edge technology and holistic care are giving soldiers hope. (Myroslava Gongadze contributed to this report. Camera: Eugene Shynkar)

Seeing Is Believing? Global Scramble to Tackle Deepfakes

Chatbots spouting falsehoods, face-swapping apps crafting porn videos, and cloned voices defrauding companies of millions — the scramble is on to rein in AI deepfakes that have become a misinformation super spreader.

Artificial intelligence is redefining the proverb “seeing is believing,” with a deluge of images created out of thin air and people shown mouthing things they never said in realistic-looking deepfakes that have eroded online trust.

“Yikes. (Definitely) not me,” tweeted billionaire Elon Musk last year in one vivid example of a deepfake video that showed him promoting a cryptocurrency scam.

China recently adopted expansive rules to regulate deepfakes but most countries appear to be struggling to keep up with the fast-evolving technology amid concerns that regulation could stymie innovation or be misused to curtail free speech.

Experts warn that deepfake detectors are vastly outpaced by creators, who are hard to catch because they operate anonymously, using AI-based software that once required specialized skills but is now widely available at low cost.

Facebook owner Meta last year said it took down a deepfake video of Ukrainian President Volodymyr Zelenskyy urging citizens to lay down their weapons and surrender to Russia.

And British campaigner Kate Isaacs, 30, said her “heart sank” when her face appeared in a deepfake porn video that unleashed a barrage of online abuse after an unknown user posted it on Twitter.

“I remember just feeling like this video was going to go everywhere — it was horrendous,” Isaacs, who campaigns against non-consensual porn, was quoted as saying by the BBC in October.

The following month, the British government voiced concern about deepfakes and warned of a popular website that “virtually strips women naked.”

‘Information apocalypse’

With no barriers to creating AI-synthesized text, audio and video, the potential for misuse in identity theft, financial fraud and tarnishing reputations has sparked global alarm.

The Eurasia Group called the AI tools “weapons of mass disruption.”

“Technological advances in artificial intelligence will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the group warned in a report.

“Advances in deepfakes, facial recognition, and voice synthesis software will render control over one’s likeness a relic of the past.”

This week AI startup ElevenLabs admitted that its voice cloning tool could be misused for “malicious purposes” after users posted a deepfake audio purporting to be actor Emma Watson reading Adolf Hitler’s biography “Mein Kampf.”

The growing volume of deepfakes may lead to what the European law enforcement agency Europol described as an “information apocalypse,” a scenario where many people are unable to distinguish fact from fiction.

“Experts fear this may lead to a situation where citizens no longer have a shared reality or could create societal confusion about which information sources are reliable,” Europol said in a report.

That was demonstrated last weekend when NFL player Damar Hamlin spoke to his fans in a video for the first time since he suffered a cardiac arrest during a match.

Hamlin thanked medical professionals responsible for his recovery, but many who believed conspiracy theories that the COVID-19 vaccine was behind his on-field collapse baselessly labeled his video a deepfake.

‘Super spreader’

China last month began enforcing new rules that require businesses offering deepfake services to obtain the real identities of their users. The rules also require deepfake content to be appropriately tagged to avoid “any confusion.”

The rules came after the Chinese government warned that deepfakes present a “danger to national security and social stability.”

In the United States, where lawmakers have pushed for a task force to police deepfakes, digital rights activists caution against legislative overreach that could kill innovation or target legitimate content.

The European Union, meanwhile, is locked in heated discussions over its proposed “AI Act.”

The law, which the EU is racing to pass this year, will require users to disclose deepfakes but many fear the legislation could prove toothless if it does not cover creative or satirical content.

“How do you reinstate digital trust with transparency? That is the real question right now,” Jason Davis, a research professor at Syracuse University, told AFP.

“The [detection] tools are coming and they’re coming relatively quickly. But the technology is moving perhaps even quicker. So like cyber security, we will never solve this, we will only hope to keep up.”

Many are already struggling to comprehend advances such as ChatGPT, a chatbot created by the U.S.-based OpenAI that is capable of generating strikingly cogent texts on almost any topic.

In a study, media watchdog NewsGuard, which called ChatGPT the “next great misinformation super spreader,” said most of the chatbot’s responses to prompts on topics such as COVID-19 and school shootings were “eloquent, false and misleading.”

“The results confirm fears … about how the tool can be weaponized in the wrong hands,” NewsGuard said.

Musk Found Not Liable in Tesla Tweet Trial

Jurors on Friday cleared Elon Musk of liability for investors’ losses in a fraud trial over his 2018 tweets falsely claiming that he had funding in place to take Tesla private.

The tweets sent the Tesla share price on a rollercoaster ride, and Musk was sued by shareholders who said the tycoon acted recklessly in an effort to squeeze investors who had bet against the company.

Jurors deliberated for barely two hours before returning to the San Francisco courtroom to say they unanimously agreed that neither Musk nor the Tesla board perpetrated fraud with the tweets and in their aftermath.

“Thank goodness, the wisdom of the people has prevailed!” tweeted Musk, who had tried but failed to get the trial moved to Texas on the grounds jurors in California would be biased against him.

“I am deeply appreciative of the jury’s unanimous finding of innocence in the Tesla 420 take-private case.”

Attorney Nicholas Porritt, who represents Glen Littleton and other investors in Tesla, had argued in court that the case was about making sure the rich and powerful have to abide by the same stock market rules as everyone else.

“Elon Musk published tweets that were false with reckless disregard as to their truth,” Porritt told the panel of nine jurors during closing arguments.

Porritt pointed to expert testimony estimating that Musk’s claim about funding, which turned out not to be true, cost investors billions of dollars overall and that Musk and the Tesla board should be made to pay damages.

But Musk attorney Alex Spiro successfully countered that the billionaire may have erred on wording in a hasty tweet, but that he did not set out to deceive anyone.

Spiro also portrayed the mercurial entrepreneur, who now owns Twitter, as having had a troubled childhood and having come to the United States as a poor youth chasing dreams.

No joke

Musk testified during three days on the witness stand that his 2018 tweet about taking Tesla private at $420 a share was no joke and that Saudi Arabia’s sovereign wealth fund was serious about helping him do it.

“To Elon Musk, if he believes it or even just thinks about it then it’s true no matter how objectively false or exaggerated it may be,” Porritt told jurors.

Tesla and its board were also to blame, because they let Musk use his Twitter account to post news about the company, Porritt argued.

The case revolved around a pair of tweets in which Musk said “funding secured” for a project to buy out the publicly traded electric automaker, then in a second tweet added that “investor support is confirmed.”

“He wrote two words ‘funding secured’ that were technically inaccurate,” Spiro said of Musk while addressing jurors.

“Whatever you think of him, this isn’t a bad tweeter trial, it’s a ‘did they prove this man committed fraud?’ trial.”

Musk did not intend to deceive anyone with the tweets and had the connections and wealth to take Tesla private, Spiro contended.

During the trial playing out in federal court in San Francisco, Spiro said that even though the tweets may have been a “reckless choice of words,” they were not fraud.

“I’m being accused of fraud; it’s outrageous,” Musk said while testifying in person.

Musk said he fired off the tweets at issue after learning of a Financial Times story about a Saudi Arabian investment fund wanting to acquire a stake in Tesla.

The trial came at a sensitive time for Musk, who has dominated the headlines for his chaotic takeover of Twitter where he has laid off more than half of the 7,500 employees and scaled down content moderation. 

ChatGPT: The Promises, Pitfalls and Panic

Excitement around ChatGPT — an easy-to-use AI chatbot that can deliver an essay or computer code upon request within seconds — has sent schools into panic and turned Big Tech green with envy.

The potential impact of ChatGPT on society remains complicated and unclear even as its creator Wednesday announced a paid subscription version in the United States.

Here is a closer look at what ChatGPT is (and is not):

Is this a turning point?  

It is entirely possible that November’s release of ChatGPT by California company OpenAI will be remembered as a turning point in introducing a new wave of artificial intelligence to the wider public.  

What is less clear is whether ChatGPT is actually a breakthrough, with some critics calling it a brilliant PR move that helped OpenAI score billions of dollars in investments from Microsoft.

Yann LeCun, Chief AI Scientist at Meta and professor at New York University, believes “ChatGPT is not a particularly interesting scientific advance,” calling the app a “flashy demo” built by talented engineers.

LeCun, speaking to the Big Technology Podcast, said ChatGPT is void of “any internal model of the world” and is merely churning “one word after another” based on inputs and patterns found on the internet.

“When working with these AI models, you have to remember that they’re slot machines, not calculators,” warned Haomiao Huang of Kleiner Perkins, the Silicon Valley venture capital firm.

“Every time you ask a question and pull the arm, you get an answer that could be marvelous… or not… The failures can be extremely unpredictable,” Huang wrote in Ars Technica, the tech news website.

Just like Google

ChatGPT is powered by an AI language model that is nearly three years old — OpenAI’s GPT-3 — and the chatbot only uses a part of its capability.  

The true revolution is the humanlike chat, said Jason Davis, research professor at Syracuse University.

“It’s familiar, it’s conversational and guess what? It’s kind of like putting in a Google search request,” he said.

ChatGPT’s rockstar-like success even shocked its creators at OpenAI, which received billions in new financing from Microsoft in January.

“Given the magnitude of the economic impact we expect here, more gradual is better,” OpenAI CEO Sam Altman said in an interview with StrictlyVC, a newsletter.

“We put GPT-3 out almost three years ago… so the incremental update from that to ChatGPT, I felt like should have been predictable and I want to do more introspection on why I was sort of miscalibrated on that,” he said.

The risk, Altman added, was startling the public and policymakers. On Tuesday, his company unveiled a tool for detecting AI-generated text, amid concerns from teachers that students may rely on artificial intelligence to do their homework.

What now?

From lawyers to speechwriters, from coders to journalists, everyone is waiting breathlessly to feel disruption caused by ChatGPT. OpenAI just launched a paid version of the chatbot – $20 per month for an improved and faster service.

For now, officially, the first significant application of OpenAI’s tech will be for Microsoft software products.  

Though details are scarce, most assume that ChatGPT-like capabilities will turn up on the Bing search engine and in the Office suite.

“Think about Microsoft Word. I don’t have to write an essay or an article, I just have to tell Microsoft Word what I wanted to write with a prompt,” said Davis.

He believes influencers on TikTok and Twitter will be the earliest adopters of this so-called generative AI since going viral requires huge amounts of content and ChatGPT can take care of that in no time.

This of course raises the specter of disinformation and spamming carried out at an industrial scale.  

For now, Davis said the reach of ChatGPT is very limited by computing power, but once this is ramped up, the opportunities and potential dangers will grow exponentially.

And much like the ever-imminent arrival of self-driving cars that never quite happens, experts disagree on whether that is a question of months or years.

Ridicule

LeCun said Meta and Google have refrained from releasing AI as potent as ChatGPT out of fear of ridicule and backlash.

Quieter releases of language-based bots, such as Meta’s Blenderbot or Microsoft’s Tay, were quickly shown to be capable of generating racist or inappropriate content.

Tech giants have to think hard before releasing something “that is going to spew nonsense” and disappoint, he said.

Zimbabwe Plans to Build $60 Billion ‘Cyber City’ to Ease Harare Congestion

Zimbabwe plans to build “Zim Cyber City,” a modern capital expected to cost up to $60 billion in raised funds and include new government buildings and a presidential palace. Critics are blasting the plan as wasteful when more than half the population lives in poverty and the government has let the current capital, Harare, fall apart. Columbus Mavhunga reports from Mount Hampden, Zimbabwe. Camera: Blessing Chigwenhembe