Category Archives: Technology

Silicon Valley & technology news. Technology is the application of conceptual knowledge to achieve practical goals, especially in a reproducible way. The word technology can also mean the products resulting from such efforts, including both tangible tools such as utensils or machines, and intangible ones such as software. Technology plays a critical role in science, engineering, and everyday life.

FTX Founder Convicted of Defrauding Cryptocurrency Customers

FTX founder Sam Bankman-Fried’s spectacular rise and fall in the cryptocurrency industry — a journey that included his testimony before Congress, a Super Bowl advertisement and dreams of a future run for president — hit rock bottom Thursday when a New York jury convicted him of fraud in a scheme that cheated customers and investors of at least $10 billion.

After the monthlong trial, jurors rejected Bankman-Fried’s claim during four days on the witness stand in Manhattan federal court that he never committed fraud or meant to cheat customers before FTX, once the world’s second-largest crypto exchange, collapsed into bankruptcy a year ago.

“His crimes caught up to him. His crimes have been exposed,” Assistant U.S. Attorney Danielle Sassoon said of the onetime billionaire, addressing the jury just before Judge Lewis A. Kaplan read them the law and deliberations began. Sassoon said Bankman-Fried turned his customers’ accounts into his “personal piggy bank” as up to $14 billion disappeared.

She urged jurors to reject Bankman-Fried’s insistence when he testified over three days that he never committed fraud or plotted to steal from customers, investors and lenders and didn’t realize his companies were at least $10 billion in debt until October 2022.

Bankman-Fried was required to stand and face the jury as guilty verdicts on all seven counts were read. He kept his hands clasped tightly in front of him. When he sat down after the reading, he kept his head tilted down for several minutes.

After the judge set a sentencing date of March 28, Bankman-Fried’s parents moved to the front row behind him. His father put his arm around his wife. As Bankman-Fried was led out of the courtroom, he looked back and nodded toward his mother, who nodded back and then became emotional, wiping her hand across her face after he left the room.

U.S. Attorney Damian Williams told reporters after the verdict that Bankman-Fried “perpetrated one of the biggest financial frauds in American history, a multibillion-dollar scheme designed to make him the king of crypto.”

“But here’s the thing: The cryptocurrency industry might be new. The players like Sam Bankman-Fried might be new. This kind of fraud, this kind of corruption is as old as time, and we have no patience for it,” he said.

Bankman-Fried’s attorney, Mark Cohen, said in a statement they “respect the jury’s decision. But we are very disappointed with the result.”

“Mr. Bankman-Fried maintains his innocence and will continue to vigorously fight the charges against him,” Cohen said.

The trial attracted intense interest with its focus on fraud on a scale not seen since the 2009 prosecution of Bernard Madoff, whose Ponzi scheme over decades cheated thousands of investors out of about $20 billion. Madoff pleaded guilty and was sentenced to 150 years in prison, where he died in 2021.

The prosecution of Bankman-Fried, 31, put a spotlight on the emerging industry of cryptocurrency and a group of young executives in their 20s who lived together in a $30 million luxury apartment in the Bahamas as they dreamed of becoming the most powerful player in a new financial field.

Prosecutors made sure jurors knew that the defendant they saw in court with short hair and a suit was also the man with big messy hair and shorts that became his trademark appearance after he started his cryptocurrency hedge fund, Alameda Research, in 2017 and FTX, his cryptocurrency exchange, two years later.

They showed the jury pictures of Bankman-Fried sleeping on a private jet, sitting with a deck of cards and mingling at the Super Bowl with celebrities including the singer Katy Perry. Assistant U.S. Attorney Nicolas Roos called Bankman-Fried someone who liked “celebrity chasing.”

In a closing argument, defense lawyer Mark Cohen said prosecutors were trying to turn “Sam into some sort of villain, some sort of monster.”

“It’s both wrong and unfair, and I hope and believe that you have seen that it’s simply not true,” he said. “According to the government, everything Sam ever touched and said was fraudulent.”

The government relied heavily on the testimony of three former members of Bankman-Fried’s inner circle, his top executives including his former girlfriend, Caroline Ellison, to explain how Bankman-Fried used Alameda Research to siphon billions of dollars from customer accounts at FTX.

With that money, prosecutors said, the Massachusetts Institute of Technology graduate gained influence and power through investments, donations, tens of millions of dollars in political contributions, congressional testimony and a publicity campaign that enlisted celebrities like comedian Larry David and football quarterback Tom Brady.

Ellison, 28, testified that Bankman-Fried directed her, while she was chief executive of Alameda Research, to commit fraud as he pursued ambitions to lead huge companies, spend money influentially and someday run for U.S. president. She said he thought he had a 5% chance of becoming president.

Becoming tearful as she described the collapse of the cryptocurrency empire last November, Ellison said the revelations that caused customers collectively to demand their money back, exposing the fraud, brought a “relief that I didn’t have to lie anymore.”

FTX cofounder Gary Wang, who was FTX’s chief technology officer, revealed in his testimony that Bankman-Fried directed him to insert code into FTX’s operations so that Alameda Research could make unlimited withdrawals from FTX and have a credit line of up to $65 billion. Wang said the money came from customers.
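
Wang’s testimony described what amounted to a special case buried in the exchange’s code: a flag that exempted Alameda’s account from the balance checks applied to every other customer. The sketch below is purely illustrative Python, not FTX’s actual code; the account names and flag are hypothetical, and the credit-line figure is the one cited in testimony.

```python
# Hypothetical illustration of the kind of carve-out described at trial;
# the names, flag and logic are assumptions, not FTX's real code.
ALAMEDA_CREDIT_LINE = 65_000_000_000  # dollar figure cited in testimony

def can_withdraw(account: dict, amount: float) -> bool:
    """Ordinary customers may only withdraw up to their balance."""
    if account.get("allow_negative"):
        # Special case: the privileged account may draw its balance far
        # below zero, effectively spending pooled customer funds up to
        # the outsized credit line.
        return account["balance"] - amount >= -ALAMEDA_CREDIT_LINE
    return account["balance"] - amount >= 0

customer = {"balance": 1_000, "allow_negative": False}
trading_firm = {"balance": 0, "allow_negative": True}

print(can_withdraw(customer, 5_000))      # False: insufficient funds
print(can_withdraw(trading_firm, 5_000))  # True: the exemption applies
```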

Nishad Singh, the former head of engineering at FTX, testified that he felt “blindsided and horrified” when he saw the extent of the fraud, saying the actions of a man he once admired left him suicidal as the company collapsed last November.

Ellison, Wang and Singh all pleaded guilty to fraud charges and testified against Bankman-Fried in the hopes of leniency at sentencing.

Bankman-Fried was arrested in the Bahamas in December and extradited to the United States, where he was freed on a $250 million personal recognizance bond with electronic monitoring and a requirement that he remain at the home of his parents in Palo Alto, California.

His communications, including hundreds of phone calls with journalists and internet influencers, along with emails and texts, eventually got him into trouble when the judge concluded he was trying to influence prospective trial witnesses and ordered him jailed in August.

During the trial, prosecutors used Bankman-Fried’s public statements, online announcements and his congressional testimony against him, showing how the entrepreneur repeatedly promised customers that their deposits were safe and secure as late as last Nov. 7 when he tweeted, “FTX is fine. Assets are fine” as customers furiously tried to withdraw their money. He deleted the tweet the next day. FTX filed for bankruptcy four days later.

In his closing, Roos mocked Bankman-Fried’s testimony, saying that under questioning from his own lawyer, the defendant’s words were “smooth, like it had been rehearsed a bunch of times.”

But under cross examination, “he was a different person,” the prosecutor said. “Suddenly on cross-examination he couldn’t remember a single detail about his company or what he said publicly. It was uncomfortable to hear. He never said he couldn’t recall during his direct examination, but it happened over 140 times during his cross-examination.”

Former federal prosecutors said the quick verdict — after only half a day of deliberation — showed how well the government tried the case.

“The government tried the case as we expected,” said Joshua A. Naftalis, a partner at Pallas Partners LLP and a former Manhattan prosecutor. “It was a massive fraud, but that doesn’t mean it had to be a complicated fraud, and I think the jury understood that argument.”

World Leaders Agree on Artificial Intelligence Risks

World leaders have agreed on the importance of mitigating risks posed by rapid advancements in the emerging technology of artificial intelligence, at a U.K.-hosted safety conference.

The inaugural AI Safety Summit, hosted by British Prime Minister Rishi Sunak in Bletchley Park, England, started Wednesday, with senior officials from 28 nations, including the United States and China, agreeing to work toward a “shared agreement and responsibility” about AI risks. Plans are in place for further meetings in South Korea and France.

Leaders including European Commission President Ursula von der Leyen, U.S. Vice President Kamala Harris and U.N. Secretary-General Antonio Guterres discussed their respective models for testing AI to ensure its safe growth.

Thursday’s session included focused conversations among what the U.K. called a small group of countries “with shared values.” The leaders in the group came from the EU, the U.N., Italy, Germany, France and Australia.

Some leaders, including Sunak, said immediate sweeping regulation is not the way forward, reflecting the view of some AI companies that fear excessive regulation could thwart the technology before it can reach its full potential.

At a press conference on Thursday, Sunak announced another landmark agreement, with countries pledging to “work together on testing the safety of new AI models before they are released.”

The countries involved in the talks included the U.S., EU, France, Germany, Italy, Japan, South Korea, Singapore, Canada and Australia. China did not participate in the second day of talks.

The summit will conclude with a conversation between Sunak and billionaire Elon Musk. Musk on Wednesday told fellow attendees that legislation on AI could pose risks, and that the best steps forward would be for governments to work to understand AI fully to harness the technology for its positive uses, including uncovering problems that can be brought to the attention of lawmakers.

Some information in this report was taken from The Associated Press and Reuters.

India Probing Phone Hacking Complaints by Opposition Politicians, Minister Says

India’s cybersecurity agency is investigating complaints of mobile phone hacking by senior opposition politicians who reported receiving warning messages from Apple, Information Technology Minister Ashwini Vaishnaw said.

Vaishnaw was quoted in the Indian Express newspaper as saying Thursday that CERT-In, the computer emergency response team based in New Delhi, had started the probe, adding that “Apple confirmed it has received the notice for investigation.”

A political aide to Vaishnaw and two officials in the federal home ministry told Reuters that all the cyber security concerns raised by the politicians were being scrutinized.

There was no immediate comment from Apple about the investigation.

This week, Indian opposition leader Rahul Gandhi accused Prime Minister Narendra Modi’s government of trying to hack into opposition politicians’ mobile phones after some lawmakers shared screenshots on social media of a notification quoting the iPhone manufacturer as saying: “Apple believes you are being targeted by state-sponsored attackers who are trying to remotely compromise the iPhone associated with your Apple ID.”

A senior minister from Modi’s government also said he had received the same notification on his phone.

Apple said it did not attribute the threat notifications to “any specific state-sponsored attacker,” adding that “it’s possible that some Apple threat notifications may be false alarms, or that some attacks are not detected.”

In 2021, India was rocked by reports that the government had used Israeli-made Pegasus spyware to snoop on scores of journalists, activists and politicians, including Gandhi.

The government has declined to reply to questions about whether India or any of its state agencies had purchased Pegasus spyware for surveillance.

US Pushes for Global Protections Against Threats Posed by AI

U.S. Vice President Kamala Harris said Wednesday that leaders have “a moral, ethical and societal duty” to protect people from the dangers posed by artificial intelligence, as she leads the Biden administration’s push for a global AI roadmap.

Analysts, in commending the effort, say human oversight is crucial to preventing the weaponization or misuse of this technology, which has applications in everything from military intelligence to medical diagnosis to making art.

“To provide order and stability in the midst of global technological change, I firmly believe that we must be guided by a common set of understandings among nations,” Harris said. “And that is why the United States will continue to work with our allies and partners to apply existing international rules and norms to AI, and work to create new rules and norms.”

Harris also announced the founding of the government’s AI Safety Institute and released draft policy guidance on the government’s use of AI and a declaration of its responsible military applications.

Just days earlier, President Joe Biden – who described AI as “the most consequential technology of our time” – signed an executive order establishing new standards, including requiring that major AI developers report their safety test results and other critical information to the U.S. government.

AI is increasingly used for a wide range of applications. On Wednesday, for example, the Defense Intelligence Agency announced that its AI-enabled military intelligence database will soon achieve “initial operational capability.”

And perhaps on the opposite end of the spectrum, some programmer decided to “train an AI model on over 1,000 human farts so it would learn to create realistic fart sounds.”

Like any other tool, AI is subject to its users’ intentions and can be used to deceive, misinform or hurt people – something that billionaire tech entrepreneur Elon Musk stressed on the sidelines of the London summit, where he said he sees AI as “one of the biggest threats” to society. He called for a “third-party referee.”

Earlier this year, Musk was among the more than 33,000 people to sign an open letter calling on AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

“Here we are, for the first time, really in human history, with something that’s going to be far more intelligent than us,” said Musk, who is looking at creating his own generative AI program. “So it’s not clear to me we can actually control such a thing. But I think we can aspire to guide it in a direction that’s beneficial to humanity. But I do think it’s one of the existential risks that we face and it’s potentially the most pressing one.”

Industry leaders such as OpenAI CEO Sam Altman voiced similar concerns in testimony before congressional committees earlier this year.

“My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world. I think that could happen in a lot of different ways,” he told lawmakers at a Senate Judiciary Committee hearing on May 16.

That’s because, said Jessica Brandt, policy director for the AI and Emerging Technology Initiative at the Brookings Institution, while “AI has been used to do pretty remarkable things” – especially in the field of scientific research – it is limited by its creators.

“It’s not necessarily doing something that humans don’t know how to do, but it’s making discoveries that humans would be unlikely to be able to make in any meaningful timeframe, because they can just perform so many calculations so quickly,” she told VOA on Zoom.

And, she said, “AI is not objective, or all-knowing. There’s been plenty of studies showing that AI is really only as good as the data that the model is trained on and that the data can have or reflect human bias. This is one of the major concerns.”

Or, as AI Now Executive Director Amba Kak said earlier this year in a magazine interview about AI systems: “The issue is not that they’re omnipotent. It is that they’re janky now. They’re being gamed. They’re being misused. They’re inaccurate. They’re spreading disinformation.”

Analysts say these government and tech officials don’t need a one-size-fits-all solution, but rather an alignment of values – and critically, human oversight and moral use.

“It’s OK to have multiple different approaches, and then also, where possible, coordinate to ensure that democratic values take root in the systems that govern technology globally,” Brandt said.

Industry leaders tend to agree, with Mira Murati, OpenAI’s chief technology officer, saying: “AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

Analysts watching regulation say the U.S. is unlikely to come up with one coherent solution to the problems posed by AI.

“The most likely outcome for the United States is a bottom-up patchwork quilt of executive branch actions,” said Bill Whyman, a senior adviser in the Strategic Technologies Program at the Center for Strategic and International Studies. “Unlike Europe, the United States is not likely to pass a broad national AI law over the next few years. Successful legislation is likely focused on less controversial and targeted measures like funding AI research and AI child safety.” 

UK Summit Aims to Tackle Thorny Issues Around Cutting-Edge AI Risks 

Digital officials, tech company bosses and researchers are converging Wednesday at a former codebreaking spy base near London to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence. 

The two-day summit focuses on so-called frontier AI — the latest and most powerful systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They’re underpinned by foundation models, which power chatbots like OpenAI’s ChatGPT and Google’s Bard and are trained on vast pools of information scraped from the internet. 

Some 100 people from 28 countries are expected to attend Prime Minister Rishi Sunak’s two-day AI Safety Summit, though the British government has refused to disclose the guest list. 

The event is a labor of love for Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI. But Vice President Kamala Harris is due to steal the focus on Wednesday with a separate speech in London setting out the U.S. administration’s more hands-on approach. 

She’s due to attend the summit on Thursday alongside government officials from more than two dozen countries including Canada, France, Germany, India, Japan, Saudi Arabia — and China, invited over the protests of some members of Sunak’s governing Conservative Party. 

Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a livestreamed conversation on Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the perils that AI poses to humanity. 

European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres and executives from U.S. artificial intelligence companies such as Anthropic and influential computer scientists like Yoshua Bengio, one of the “godfathers” of AI, are also expected. 

The meeting is being held at Bletchley Park, a former top secret base for World War II codebreakers that’s seen as a birthplace of modern computing. 

One of Sunak’s major goals is to get delegates to agree on a first-ever communique about the nature of AI risks. He has said the technology brings new opportunities but has warned of frontier AI’s threat to humanity, because it could be used to create biological weapons or be exploited by terrorists to sow fear and destruction.

Only governments, not companies, can keep people safe from AI’s dangers, Sunak said last week. However, in the same speech, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first. 

In contrast, Harris will stress the need to address the here and now, including “societal harms that are already happening such as bias, discrimination and the proliferation of misinformation.” 

Harris plans to stress that the Biden administration is “committed to hold companies accountable, on behalf of the people, in a way that does not stifle innovation,” including through legislation. 

“As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities and the stability of our democracies,” she plans to say.

She’ll point to President Biden’s executive order this week, setting out AI safeguards, as evidence the U.S. is leading by example in developing rules for artificial intelligence that work in the public interest. Among measures she will announce is an AI Safety Institute, run through the Department of Commerce, to help set the rules for “safe and trusted AI.” 

Harris also will encourage other countries to sign up to a U.S.-backed pledge to stick to “responsible and ethical” use of AI for military aims. 

A White House official gave details of Harris’s speech, speaking on condition of anonymity to discuss her remarks in advance. 

UK Kicks Off World’s First AI Safety Summit

The world’s first major summit on artificial intelligence (AI) safety opens in Britain Wednesday, with political and tech leaders set to discuss possible responses to the society-changing technology.

British Prime Minister Rishi Sunak, U.S. Vice President Kamala Harris, EU chief Ursula von der Leyen and U.N. Secretary-General Antonio Guterres will all attend the two-day conference, which will focus on growing fears about the implications of so-called frontier AI.

The release of the latest models has offered a glimpse into the potential of AI, but has also prompted concerns around issues ranging from job losses to cyber-attacks and the control that humans actually have over the systems.

Sunak, whose government initiated the gathering, said in a speech last week that his “ultimate goal” was “to work towards a more international approach to safety where we collaborate with partners to ensure AI systems are safe before they are released.

“We will push hard to agree the first ever international statement about the nature of these risks,” he added, drawing comparisons to the approach taken to climate change.

But London has reportedly had to scale back its ambitions around ideas such as launching a new regulatory body amid a perceived lack of enthusiasm.

Italian Prime Minister Giorgia Meloni is one of the few world leaders, and the only one from the G7, attending the conference.

Elon Musk is due to appear, but it is not clear yet whether he will be physically at the summit in Bletchley Park, north of London, where top British codebreakers cracked Nazi Germany’s “Enigma” code.

‘Talking shop’

While the potential of AI raises many hopes, particularly for medicine, its development is seen as largely unchecked.

In his speech, Sunak stressed the need for countries to develop “a shared understanding of the risks that we face.”

But lawyer and investigator Cori Crider, a campaigner for “fair” technology, warned that the summit could be “a bit of a talking shop.

“If he were serious about safety, Rishi Sunak needed to roll deep and bring all of the U.K. majors and regulators in tow and he hasn’t,” she told a press conference in San Francisco.

“Where is the labor regulator looking at whether jobs are being made unsafe or redundant? Where’s the data protection regulator?” she asked.

Having faced criticism for looking only at the risks of AI, the U.K. on Wednesday pledged $46 million to fund AI projects around the world, starting in Africa.

Ahead of the meeting, the G7 powers agreed on Monday on a non-binding “code of conduct” for companies developing the most advanced AI systems.

The White House announced its own plan to set safety standards for the deployment of AI that will require companies to submit certain systems to government review.

And in Rome, ministers from Italy, Germany and France called for an “innovation-friendly approach” to regulating AI in Europe, as they urged more investment to challenge the U.S. and China.

China will be present, but it is unclear at what level.

News website Politico reported that London had invited President Xi Jinping, signaling its eagerness for a senior Chinese representative.

Beijing’s invitation has raised eyebrows amid heightened tensions with Western nations and accusations of technological espionage. 

Biden Signs Sweeping Executive Order on AI Oversight

President Joe Biden on Monday signed a wide-ranging executive order on artificial intelligence, covering topics as varied as national security, consumer privacy, civil rights and commercial competition. The administration heralded the order as taking “vital steps forward in the U.S.’s approach on safe, secure, and trustworthy AI.”

The order directs departments and agencies across the U.S. federal government to develop policies aimed at placing guardrails alongside an industry that is developing newer and more powerful systems at a pace that has many concerned it will outstrip effective regulation.

“To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said during a signing ceremony at the White House. The order, he added, is “the most significant action any government anywhere in the world has ever taken on AI safety, security and trust.” 

‘Red teaming’ for security 

One of the marquee provisions of the new order requires companies developing advanced artificial intelligence systems to conduct rigorous testing of their products to ensure that bad actors cannot use them for nefarious purposes. The process, known as red teaming, will assess, among other things, “AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear and cybersecurity risks.”

The National Institute of Standards and Technology will set the standards for such testing, and AI companies will be required to report their results to the federal government prior to releasing new products to the public. The Departments of Homeland Security and Energy will be closely involved in the assessment of threats to vital infrastructure. 

To counter the threat that AI will enable the creation and dissemination of false and misleading information, including computer-generated images and “deep fake” videos, the Commerce Department will develop guidance for the creation of standards that will allow computer-generated content to be easily identified, a process commonly called “watermarking.” 

The order directs the White House chief of staff and the National Security Council to develop a set of guidelines for the responsible and ethical use of AI systems by the U.S. national defense and intelligence agencies.

Privacy and civil rights

The order proposes a number of steps meant to increase Americans’ privacy protections when AI systems access information about them. That includes supporting the development of privacy-protecting technologies such as cryptography and creating rules for how government agencies handle data containing citizens’ personally identifiable information.

However, the order also notes that the United States is currently in need of legislation that codifies the kinds of data privacy protections that Americans are entitled to. Currently, the U.S. lags far behind Europe in the development of such rules, and the order calls on Congress to “pass bipartisan data privacy legislation to protect all Americans, especially kids.”

The order recognizes that the algorithms that enable AI to process information and answer users’ questions can themselves be biased in ways that disadvantage members of minority groups and others often subject to discrimination. It therefore calls for the creation of rules and best practices addressing the use of AI in a variety of areas, including the criminal justice system, health care system and housing market.

The order covers several other areas, promising action on protecting Americans whose jobs may be affected by the adoption of AI technology; maintaining the United States’ market leadership in the creation of AI systems; and assuring that the federal government develops and follows rules for its own adoption of AI systems.

Open questions

Experts say that despite the broad sweep of the executive order, much remains unclear about how the Biden administration will approach the regulation of AI in practice.

Benjamin Boudreaux, a policy researcher at the RAND Corporation, told VOA that while it is clear the administration is “trying to really wrap their arms around the full suite of AI challenges and risks,” much work remains to be done.

“The devil is in the details here about what funding and resources go to executive branch agencies to actually enact many of these recommendations, and just what models a lot of the norms and recommendations suggested here will apply to,” Boudreaux said.

International leadership

Looking internationally, the order says the administration will work to take the lead in developing “an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety.”

James A. Lewis, senior vice president and director of the strategic technologies program at the Center for Strategic and International Studies, told VOA that the executive order does a good job of laying out where the U.S. stands on many important issues related to the global development of AI.

“It hits all the right issues,” Lewis said. “It’s not groundbreaking in a lot of places, but it puts down the marker for companies and other countries as to how the U.S. is going to approach AI.”

That’s important, Lewis said, because the U.S. is likely to play a leading role in the development of the international rules and norms that grow up around the technology.

“Like it or not — and certainly some countries don’t like it — we are the leaders in AI,” Lewis said. “There’s a benefit to being the place where the technology is made when it comes to making the rules, and the U.S. can take advantage of that.”

‘Fighting the last war’ 

Not all experts are certain the Biden administration’s focus is on the real threats that AI might present to consumers and citizens. 

Louis Rosenberg, a 30-year veteran of AI development and the CEO of American tech firm Unanimous AI, told VOA he is concerned the administration may be “fighting the last war.”

“I think it’s great that they’re making a bold statement that this is a very important issue,” Rosenberg said. “It definitely shows that the administration is taking it seriously and that they want to protect the public from AI.”

However, he said, when it comes to consumer protection, the administration seems focused on how AI might be used to advance existing threats to consumers, like fake images and videos and convincing misinformation — things that already exist today.

“When it comes to regulating technology, the government has a track record of underestimating what’s new about the technology,” he said.

Rosenberg said he is more concerned about the new ways in which AI might be used to influence people. For example, he noted that AI systems are being built to interact with people conversationally.

“Very soon, we’re not going to be typing in requests into Google. We’re going to be talking to an interactive AI bot,” Rosenberg said. “AI systems are going to be really effective at persuading, manipulating, potentially even coercing people conversationally on behalf of whomever is directing that AI. This is the new and different threat that did not exist before AI.” 

Musk Pulls Plug on Paying for X Factchecks

Elon Musk has said that corrections to posts on X would no longer be eligible for payment, as the social network comes under mounting criticism for becoming a conduit for misinformation.

In the year since taking over Twitter, now rebranded as X, Musk has gutted content moderation, restored accounts of previously banned extremists, and allowed users to purchase account verification, helping them profit from viral — but often inaccurate — posts.

Musk has instead promoted Community Notes, in which X users police the platform, as a tool to combat misinformation. 

But on Sunday, Musk tweeted a modification in how Community Notes works.

“Making a slight change to creator monetization: Any posts that are corrected by @CommunityNotes become ineligible for revenue share,” he wrote.  

“The idea is to maximize the incentive for accuracy over sensationalism,” he added. 

X pays content creators whose work generates lots of views a share of advertising revenue. 

Musk warned against using corrections to make X users ineligible for receiving payouts.

“Worth ‘noting’ that any attempts to weaponize @CommunityNotes to demonetize people will be immediately obvious, because all code and data is open source,” he posted.

Musk’s announcement follows Friday’s unveiling of a $16-a-month subscription plan under which users who pay more get the biggest boost for their replies. Earlier this year, the platform unveiled an $8-a-month plan for a “verified” account.

A recent study by the disinformation monitoring group NewsGuard found that verified, paying subscribers were the big spreaders of misinformation about the Israel-Hamas war. 

“Nearly three-fourths of the most viral posts on X advancing misinformation about the Israel-Hamas War are being pushed by ‘verified’ X accounts,” the group said.

It said the 250 most-engaged posts that promoted one of 10 prominent false or unsubstantiated narratives related to the war were viewed more than 100 million times globally in just one week. 

NewsGuard said 186 of those posts were made from verified accounts and only 79 had been fact-checked by Community Notes. 

Verified accounts “turned out to be a boon for bad actors sharing misinformation,” said NewsGuard.

“For less than the cost of a movie ticket, they have gained the added credibility associated with the once-prestigious blue checkmark, enabling them to reach a larger audience on the platform,” it said.

While the organization said it found misinformation spreading widely on other social media platforms such as Facebook, Instagram, TikTok and Telegram, it added that it found false narratives about the Israel-Hamas war tend to go viral on X before spreading elsewhere. 

Musk Says Starlink to Provide Connectivity in Gaza

Elon Musk said on Saturday that SpaceX’s Starlink will support communication links in Gaza with “internationally recognized aid organizations.”

A telephone and internet blackout isolated people in the Gaza Strip from the world and from each other on Saturday, with calls to loved ones, ambulances or colleagues elsewhere all but impossible as Israel widened its air and ground assault.

International humanitarian organizations said the blackout, which began on Friday evening, was worsening an already desperate situation by impeding lifesaving operations and preventing them from contacting their staff on the ground.

Following Russia’s February 2022 invasion of Ukraine, Starlink satellites were reported to have been critical to maintaining internet connectivity in some areas despite attempted Russian jamming.

Since then, Musk has said he declined to extend coverage over Russian-occupied Crimea, refusing to allow his satellites to be used for Ukrainian attacks on Russian forces there.

UN Announces Advisory Body on Artificial Intelligence 

The United Nations has begun an effort to help the world manage the risks and benefits of artificial intelligence.

U.N. Secretary-General Antonio Guterres on Thursday launched a 39-member advisory body of tech company executives, government officials and academics from countries spanning six continents.

The panel aims to issue preliminary recommendations on AI governance by the end of the year and finalize them before the U.N. Summit of the Future next September.

“The transformative potential of AI for good is difficult even to grasp,” Guterres said. He pointed to possible uses including predicting crises, improving public health and education, and tackling the climate crisis.

However, he cautioned, “it is already clear that the malicious use of AI could undermine trust in institutions, weaken social cohesion and threaten democracy itself.”

Widespread concern about the risks associated with AI has grown since tech company OpenAI launched ChatGPT last year. Its ease of use has raised concern that the tool could take over writing tasks that previously only humans could perform.

With many calling for regulation of AI, researchers and lawmakers have stressed the need for global cooperation on the matter.

The U.N.’s new body on AI will hold its first meeting Friday.

Some information for this report came from Reuters. 

33 US States Sue Meta, Accusing Platform of Harming Children

Thirty-three U.S. states are suing Meta Platforms Inc., accusing it of damaging young people’s mental health through the addictive nature of its social media platforms.

The suit filed Tuesday in federal court in Oakland, California, alleges Meta knowingly installed addictive features on its social media platforms, Instagram and Facebook, and has collected data on children younger than 13, without their parents’ consent, violating federal law.

“Research has shown that young people’s use of Meta’s social media platforms is associated with depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes,” the complaint says.

The filing comes after Meta’s own research in 2021 showed the company was aware of the damage Instagram can do to teenagers, especially girls.

In Meta’s 2021 study, 13.5% of teen girls said Instagram makes thoughts of suicide worse and 17% of teen girls said it makes eating disorders worse.

Meta responded to the lawsuit by saying it has “already introduced over 30 tools to support teens and their families.”

“We’re disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path,” the company added.

Meta is one of many social media companies facing criticism and legal action, with lawsuits also filed against ByteDance’s TikTok and Google’s YouTube.

Measures to protect children on social media exist, such as a federal law that bans kids under 13 from setting up accounts, but they are easily circumvented.

The dangers of social media for children have been highlighted by U.S. Surgeon General Dr. Vivek Murthy, who said the effects of social media require “immediate action to protect kids now.”

In addition to the 33 states suing, nine more state attorneys general are expected to join and file similar lawsuits.

Some information in this report came from The Associated Press and Reuters. 

Taiwan Computer Chip Workers Adjust to Life in American Desert

Phoenix, Arizona, in America’s Southwest, is the site of a Taiwanese semiconductor chip making facility. One part of President Joe Biden’s cornerstone agenda is to rely less on manufacturing from overseas and boost domestic production of chips that run everything from phones to cars. Many Taiwanese workers who moved to the U.S. to work at the facility face the challenges of living in a new land. VOA’s Stella Hsu, Enming Liu and Elizabeth Lee have the story.

Governments, Firms Should Spend More on AI Safety, Top Researchers Say

Artificial intelligence companies and governments should allocate at least one third of their AI research and development funding to ensuring the safety and ethical use of the systems, top AI researchers said in a paper on Tuesday. 

The paper, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks. 

“Governments should also mandate that companies are legally liable for harms from their frontier AI systems that can be reasonably foreseen and prevented,” according to the paper written by three Turing Award winners, a Nobel laureate, and more than a dozen top AI academics. 

Currently there are no broad-based regulations focusing on AI safety, and the first set of legislation by the European Union is yet to become law, as lawmakers have not yet agreed on several issues.

“Recent state-of-the-art AI models are too powerful, and too significant, to let them develop without democratic oversight,” said Yoshua Bengio, one of the three researchers known as the godfathers of AI.

“It [investments in AI safety] needs to happen fast, because AI is progressing much faster than the precautions taken,” he said.

Authors include Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song and Yuval Noah Harari.

Since the launch of OpenAI’s generative AI models, top academics and prominent CEOs such as Elon Musk have warned about the risks of AI, including calling for a six-month pause in developing powerful AI systems.

Some companies have countered this, saying they will face high compliance costs and disproportionate liability risks.

“Companies will complain that it’s too hard to satisfy regulations — that ‘regulation stifles innovation’ — that’s ridiculous,” said British computer scientist Stuart Russell.

“There are more regulations on sandwich shops than there are on AI companies.” 

India Conducts Space Flight Test Ahead Of 2025 Crewed Mission

India on Saturday successfully carried out the first of a series of key test flights for its planned mission to take astronauts into space by 2025, after overcoming a technical glitch, the space agency said.

The test involved launching a module to outer space and bringing it back to Earth to check the spacecraft’s crew escape system, Indian Space Research Organization chief S. Somanath said. The module was being recovered after its touchdown in the Bay of Bengal.

The launch was first delayed by 45 minutes in the morning because of weather conditions. The attempt was then deferred by more than an hour when an issue with the engine led the ground computer to put the module’s liftoff on hold, Somanath said.

The glitch caused by a monitoring anomaly in the system was rectified and the test was carried out successfully 75 minutes later from the Sriharikota satellite launching station in southern India, Somanath told reporters.

The successful test paves the way for other uncrewed missions, including sending a robot into space next year.

In September, India successfully launched its first space mission to study the sun, less than two weeks after a successful uncrewed landing near the south pole region of the moon.

After a failed attempt to land on the moon in 2019, India in September joined the United States, the Soviet Union and China as only the fourth country to achieve the milestone.

The successful mission showcased India’s rising standing as a technology and space powerhouse and dovetails with Prime Minister Narendra Modi’s desire to project an image of an ascendant country asserting its place among the global elite.

Signaling a roadmap for India’s future space ambitions, Modi earlier this week announced that India’s space agency will set up an Indian-crafted space station by 2035 and land an Indian astronaut on the moon by 2040.

Active since the 1960s, India has launched satellites for itself and other countries, and successfully put one in orbit around Mars in 2014. India is planning its first mission to the International Space Station next year in collaboration with the United States.

Philippines Orders Military to Stop Using AI Apps Due to Security Risks

The Philippine defense chief has ordered all defense personnel and the 163,000-member military to refrain from using digital applications that harness artificial intelligence to generate personal portraits, saying they could pose security risks.

Defense Secretary Gilberto Teodoro Jr. issued the order in a Saturday memorandum, as Philippine forces have been working to weaken decades-old communist and Muslim insurgencies and defend territorial interests in the disputed South China Sea.

The Department of National Defense on Friday confirmed the authenticity of the memo, which has been circulating online in recent days, but did not provide other details, including what prompted Teodoro to issue the prohibition.

Teodoro specifically warned against the use of a digital app that requires users to submit at least 10 pictures of themselves and then harnesses AI to create “a digital person that mimics how a real individual speaks and moves.” Such apps pose “significant privacy and security risks,” he said.

“This seemingly harmless and amusing AI-powered application can be maliciously used to create fake profiles that can lead to identity theft, social engineering, phishing attacks and other malicious activities,” Teodoro said. “There has already been a report of such a case.”

Teodoro ordered all defense and military personnel “to refrain from using AI photo generator applications and practice vigilance in sharing information online” and said their actions should adhere to the Philippines Defense Department’s values and policies.

Chinese Netizens Post Hate-Filled Comments to Israeli Embassy’s Online Account

After the Hamas attack on Israel, the Israeli Embassy in Beijing began posting on China’s social media platform Weibo. The online effort to gain popular support appears to be backfiring as comments revile the Jewish state, applaud Hamas and praise Adolf Hitler.

The embassy’s account, which has 24 million followers, shows almost 100 posts since the Oct. 7 attack. Some are disturbing, such as an image of a baby’s corpse burnt in the attack. Others suggest Israeli resilience, such as the story of one person who was wounded at the Nova Festival but rescued several other music fans after the attack.

The comment areas have been flooded with hate speech such as “Heroic Hamas, good job!” and “Hitler was wise,” a reference to the German leader who orchestrated the deaths of 6 million Jews before and during World War II. Many people changed their Weibo avatars to the Israeli flag with a Nazi swastika in the middle.

Occasionally, someone expresses support for Israel and accuses Hamas of being a terrorist group. This triggers strong reactions from other netizens, such as “Only dead Israelis are good Israelis” and “the United States supports Israel, and the friend of the enemy is the enemy.”

Similar commentary has flooded sites elsewhere on China’s heavily censored internet.

VOA Mandarin could not determine how many of the Weibo accounts posting to the Israeli Embassy account belong to people who work for the Chinese government.

The Israeli Embassy in China did not respond to interview requests from VOA Mandarin.

Eric Liu, a former Weibo moderator who is now editor of China Digital Times, told VOA Mandarin the Israeli Embassy “has received more comments recently, which are very straightforwardly hateful, with antisemitic content. They probably have taken the initiative to contain it.”

Liu believes the fact that the antisemitic remarks remain online shows the Chinese government is comfortable with them. China has long backed the Palestinian cause, but more recently it has also boosted ties with Israel as it seeks a larger role in trade, technology and diplomacy.

“It’s more of a voice influenced by public opinion,” he said. “Relatively speaking, it is an extreme voice. Moderate voices cannot be heard. Most of the participants are habitual offenders who hate others. But they are also spontaneous, or rather, they are spontaneous under the guidance” of the government censors.

Gu Guoping, a retired Shanghai teacher and human rights citizen-journalist, told VOA Mandarin, “I don’t go to Weibo, WeChat, or QQ. These are all anti-human brainwashing platforms controlled by the Chinese Communist Party. Due to the CCP’s long-term brainwashing and indoctrination of ordinary people, as well as internet censorship, many Weibo users … [confuse] right and wrong.”

“They don’t know Israel at all. The Israeli nation is an amazing, great, humane and civilized nation,” said Gu, who emphasized that Hamas killed innocent people in Israel first, and Israel’s counterattack was legitimate self-defense.

Liu said that Weibo moderators usually must delete hateful comments toward foreign embassies in China. However, they may receive instructions from the Cyberspace Administration of China and the State Council Information Office for major incidents, and different standards may be applied.

VOA Mandarin contacted the Chinese Embassy in Washington, Cyberspace Administration of China and the State Council Information Office for comment but did not receive a reply.

“The government’s opinion has been very, very clear, which is why the online public opinion has such an obvious tendency,” Liu said. “It must be the all-round propaganda machine that led the public opinion to be like this.”

While calling for a cease-fire in the Israel-Gaza conflict, Chinese officials have refused to condemn Hamas by name. Some observers say Beijing is exploiting the Israel-Hamas war to diminish U.S. influence.

On Saturday, China’s Foreign Minister Wang Yi condemned Israel for going “beyond the scope of self-defense” and called for it to “cease its collective punishment of the people of Gaza.”

When the Iranian Embassy in China posted comments by the Iranian president accusing the United States and Israel of causing the deadly explosion at the Ahli Arab Hospital, Chinese netizens posted their support.

U.S. President Joe Biden said during his visit to Tel Aviv on October 18 that the “intel” provided by his team regarding the hospital attack exonerated Israel. Israel said the militant group Islamic Jihad caused the blast that killed at least 100 people. The militant group that often works with Hamas has denied responsibility. Palestinian officials and several Arab leaders accuse Israel of hitting the hospital amid its ongoing airstrikes in Gaza.

The Weibo accounts of other foreign embassies and diplomats that have posted support for Israel have also been targeted by Chinese netizens. When the Swiss ambassador to China, Jürg Burri, posted on Oct. 13, “I send my deepest condolences to the victims and their families in the terrorist attacks in Gaza,” he was criticized for “pseudo-neutrality.”

“I don’t even want to wear a Swiss watch anymore! So angry,” said one netizen.

Liu believes the netizens’ support for Gaza will change.

“It’s not like that they stand with Palestine,” he said. “Maybe they will hate Palestine tomorrow because they believe in Islam. [The posters] are talking in general terms and do not care about the life and death of Palestine. Hatred of Israelis and Jews is the core.”

EU Opens Disinformation Probes into Meta, TikTok

The EU announced probes Thursday into Facebook owner Meta and TikTok, seeking more details on the measures they have taken to stop the spread of “illegal content and disinformation” after the Hamas attack on Israel.

The European Commission said it had sent formal requests for information to Meta and TikTok, among the first procedures launched under the EU’s new law on digital content.

The EU launched a similar probe into billionaire mogul Elon Musk’s social media platform X, formerly Twitter, last week.

The commission said the request to Meta related “to the dissemination and amplification of illegal content and disinformation” around the Hamas-Israel conflict.

In a separate statement, it said it wanted to know more about TikTok’s efforts against “the spreading of terrorist and violent content and hate speech”.

The EU’s executive arm added that it wanted more information from Meta on its “mitigation measures to protect the integrity of elections”.

Meta and TikTok have until October 25 to respond, with a deadline of November 8 for less urgent aspects of the demand for information.

The commission said it also sought more details about how TikTok was complying with rules on protecting minors online.

The European Union has built a powerful armory to challenge the power of big tech with its landmark Digital Services Act (DSA) and a sister law, the Digital Markets Act, that hits internet giants with tough new curbs on how they do business.

The EU’s fight against disinformation has intensified since Moscow’s invasion of Ukraine last year and Russian attempts to sway European public opinion.

The issue has gained further urgency since Hamas’ October 7 assault on Israel and its aftermath, which sparked a wave of violent images that flooded the platforms.

The DSA came into effect in August for “very large” platforms, those with more than 45 million monthly European users, including Meta and TikTok.

The DSA bans illegal online content under threat of fines running as high as six percent of a company’s global turnover.

The EU’s top tech enforcer, Thierry Breton, sent warning letters to tech CEOs including Meta’s Mark Zuckerberg, TikTok’s Shou Zi Chew and Sundar Pichai of YouTube owner Alphabet.

Growing EU fears

Breton, EU internal market commissioner, told the executives to crack down on illegal content following Hamas’ attack.

Meta said last week that it was putting special resources towards cracking down on illegal and problematic content related to the Hamas-Israel conflict.

On Wednesday, Breton expressed his fears over the impact of disinformation on the EU.

“The widespread dissemination of illegal content and disinformation… carries a clear risk of stigmatization of certain communities, destabilization of our democratic structures, not to mention the exposure of our children to violent content,” he said.

AFP fact-checkers have found several posts on Facebook, TikTok and X promoting a fake White House document purporting to allocate $8 billion in military assistance to Israel.

And several platforms have had users passing off material from other conflicts, or even from video games, as footage from Israel or Gaza.

Since the EU’s tougher action on digital behemoths, some companies, including Meta, are exploring whether to offer a paid-for version of their services in the European Union.

To Find Out How Wildlife Is Doing, Scientists Try Listening

A reedy pipe and a high-pitched trill duet against the backdrop of a low-pitched insect drone. Their symphony is the sound of a forest and is monitored by scientists to gauge biodiversity.

The recording from the forest in Ecuador is part of new research looking at how artificial intelligence could track animal life in recovering habitats.

When scientists want to measure reforestation, they can survey large tracts of land with tools like satellite imagery and lidar.

But determining how fast and abundantly wildlife is returning to an area presents a more difficult challenge — sometimes requiring an expert to sift through sound recordings and pick out animal calls.

Jorg Muller, a professor and field ornithologist at the University of Wurzburg Biocenter, wondered if there was a different way.

“I saw the gap that we need, particularly in the tropics, better methods to quantify the huge diversity… to improve conservation actions,” he told AFP.

He turned to bioacoustics, which uses sound to learn more about animal life and habitats.

It is a long-standing research tool, but more recently it has been paired with machine learning to process large amounts of data more quickly.

Muller and his team recorded audio at sites in Ecuador’s Choco region ranging from recently abandoned cacao plantations and pastures to agricultural land recovering from use to old-growth forests.

They first had experts listen to the recordings and pick out birds, mammals and amphibians.

Then, they carried out an acoustic index analysis, which gives a measure of biodiversity based on broad metrics from a soundscape, like volume and frequency of noises.
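
Indices like these can be computed directly from a spectrogram. Below is a minimal Python sketch of one widely used example, the Acoustic Complexity Index, which scores how much sound intensity fluctuates within each frequency band; higher values tend to track animal vocal activity rather than steady background noise. The file name and parameters are hypothetical, and the study does not specify which indices the team used.

```python
# Rough sketch of the Acoustic Complexity Index (ACI); the file name,
# window length and FFT settings are illustrative assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def acoustic_complexity_index(samples, rate, window_s=5.0):
    """Sum, over time windows and frequency bins, the intensity change
    between adjacent spectrogram frames relative to total intensity."""
    freqs, times, sxx = spectrogram(samples, fs=rate, nperseg=512)
    step = max(1, int(window_s / (times[1] - times[0])))  # frames per window
    aci = 0.0
    for start in range(0, sxx.shape[1] - step + 1, step):
        chunk = sxx[:, start:start + step]
        diffs = np.abs(np.diff(chunk, axis=1)).sum(axis=1)  # bin-wise change
        totals = chunk.sum(axis=1) + 1e-12                  # avoid divide-by-zero
        aci += (diffs / totals).sum()
    return aci

# Hypothetical usage with a field recording.
rate, samples = wavfile.read("forest_site_01.wav")
if samples.ndim > 1:
    samples = samples.mean(axis=1)  # mix stereo down to mono
print(f"ACI: {acoustic_complexity_index(samples.astype(float), rate):.1f}")
```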

Finally, they ran two weeks of recordings through an AI-assisted computer program trained to distinguish 75 bird calls.

More recordings needed

The program was able to pick out the calls on which it was trained in a consistent way, but could it correctly identify the relative biodiversity of each location?

To check this, the team used two baselines: one from the experts who listened to the audio recordings, and a second based on insect samples from each location, which offer a proxy for biodiversity.

While the library of available sounds to train the AI model meant it could only identify a quarter of the bird calls the experts could, it was still able to correctly gauge biodiversity levels in each location, the study said.

“Our results show that soundscape analysis is a powerful tool to monitor the recovery of faunal communities in hyperdiverse tropical forest,” said the research published Tuesday in the journal Nature Communications.

“Soundscape diversity can be quantified in a cost-effective and robust way across the full gradient from active agriculture to recovering and old-growth forests,” it added.

There are still shortcomings, including a paucity of animal sounds on which to train AI models.

And the approach can only capture species that announce their presence.

“Of course (there is) no information on plants or silent animals. However, birds and amphibians are very sensitive to ecological integrity, they are a very good surrogate,” Muller told AFP.

He believes the tool could become increasingly useful given the current push for “biodiversity credits” — a way of monetizing the protection of animals in their natural habitat.

“Being able to directly quantify biodiversity, rather than relying on proxies such as growing trees, encourages and allows external assessment of conservation actions, and promotes transparency,” the study said.

US Imposes New Chip Export Controls on China

The U.S. Commerce Department on Tuesday tightened its export controls to keep China from acquiring advanced computer chips that it could use to help develop hypersonic missiles and artificial intelligence.

Commerce Secretary Gina Raimondo said the new controls are “intended to protect technologies that have clear national security or human rights implications.”   

The new controls could increase tensions between the United States, the world’s biggest economy, and No. 2 China. In recent talks over several months with high-ranking U.S. officials, Beijing had appealed for “concrete actions” from Washington to improve relations between the two countries, although U.S. officials warned that the new export rules were in the offing.

Raimondo told reporters, “The vast majority of [the sale of] semiconductors [to China] will remain unrestricted. But when we identify national security or human rights threats, we will act decisively and in concert with our allies.”

The Commerce Department said the new restrictions came after consulting with U.S. chip manufacturers and conducting technological analyses.

The new controls allow the monitoring of the sale of chips that could still be used for military aims, even if they might not specifically meet the thresholds for trade limitations. The U.S. said chip exports can also be restricted to companies headquartered in Macao, a Chinese territory, or other countries under a U.S. arms embargo, to prevent them from circumventing the controls and providing chips to China.

The updated restrictions, an expansion of export controls announced last year, also make it more difficult for China to manufacture advanced chips abroad. The list of manufacturing equipment that falls under the export controls has also been expanded, among other changes to the policy.

China protested last year’s export controls, viewing the design and manufacture of high-level semiconductors as essential for its economic growth. Raimondo has said the limits on these chips are not designed to impair China’s economy.  

Chinese government officials are scheduled to go to San Francisco in November for the Asia-Pacific Economic Cooperation summit.

U.S. President Joe Biden has suggested he could meet on the sidelines of the summit with Chinese President Xi Jinping, though a meeting has yet to be confirmed. The two leaders met last year following the Group of 20 summit in Bali, Indonesia, shortly after the export controls were announced.

Some material in this report came from The Associated Press.