Category Archives: Business


Europe Signs Off on New Privacy Pact That Allows People’s Data to Keep Flowing to US 

The European Union signed off Monday on a new agreement over the privacy of people’s personal information that gets pinged across the Atlantic, aiming to ease European concerns about electronic spying by American intelligence agencies.

The EU-U.S. Data Privacy Framework has an adequate level of protection for personal data, the EU’s executive commission said. That means it’s comparable to the 27-nation bloc’s own stringent data protection standards, so companies can use it to move information from Europe to the United States without adding extra security.

U.S. President Joe Biden signed an executive order in October to implement the deal after reaching a preliminary agreement with European Commission President Ursula von der Leyen. Washington and Brussels made an effort to resolve their yearslong battle over the safety of EU citizens’ data that tech companies store in the U.S. after two earlier data transfer agreements were thrown out.

“Personal data can now flow freely and safely from the European Economic Area to the United States without any further conditions or authorizations,” EU Justice Commissioner Didier Reynders said at a press briefing in Brussels.

Washington and Brussels long have clashed over differences between the EU’s stringent data privacy rules and the comparatively lax regime in the U.S., which lacks a federal privacy law. That created uncertainty for tech giants including Google and Facebook parent Meta, raising the prospect that U.S. tech firms might need to keep European data that is used for targeted ads out of the United States.

The European privacy campaigner who triggered legal challenges over the practice, however, dismissed the latest deal. Max Schrems said the new agreement failed to resolve core issues and vowed to challenge it to the EU’s top court.

Schrems kicked off the legal saga by filing a complaint about the handling of his Facebook data after whistleblower Edward Snowden’s revelations a decade ago about how the U.S. government eavesdropped on people’s online data and communications.

Calling the new agreement a copy of the previous one, Schrems said his Vienna-based group, NOYB, was readying a legal challenge and expected the case to be back in the European Court of Justice by the end of the year.

“Just announcing that something is ‘new’, ‘robust’ or ‘effective’ does not cut it before the Court of Justice,” Schrems said. “We would need changes in U.S. surveillance law to make this work — and we simply don’t have it.”

The framework, which takes effect Tuesday, promises strengthened safeguards against data collection abuses and provides multiple avenues for redress.

Under the deal, U.S. intelligence agencies’ access to data is limited to what’s “necessary and proportionate” to protect national security.

Europeans who suspect U.S. authorities have accessed their data will be able to complain to a new Data Protection Review Court, made up of judges appointed from outside the U.S. government. The threshold to file a complaint will be “very low” and won’t require people to prove their data has been accessed, Reynders said.

Business groups welcomed the decision, which clears a legal path for companies to continue cross-border data flows.

“This is a major breakthrough,” said Alexandre Roure, public policy director at the Brussels office of the Computer and Communications Industry Association, whose members include Apple, Google and Meta.

“After waiting for years, companies and organisations of all sizes on both sides of the Atlantic finally have the certainty of a durable legal framework that allows for transfers of personal data from the EU to the United States,” Roure said.

In an echo of Schrems’ original complaint, Meta Platforms was hit in May with a record $1.3 billion EU privacy fine for relying on legal tools deemed invalid to transfer data across the Atlantic.

Meta had warned in its latest earnings report that without a legal basis for data transfers, it would be forced to stop offering its products and services in Europe, “which would materially and adversely affect our business, financial condition, and results of operations.”

Meta’s Twitter Rival Threads Overtakes ChatGPT as Fastest-Growing Platform 

Meta Platforms’ Twitter rival Threads crossed 100 million sign-ups within five days of launch, CEO Mark Zuckerberg said on Monday, dethroning ChatGPT as the fastest-growing online platform to hit the milestone. 

Threads has been setting records for user growth since its launch on Wednesday, with celebrities, politicians and other newsmakers joining the platform, which analysts see as the first serious threat to the Elon Musk-owned microblogging app. 

“That’s mostly organic demand, and we haven’t even turned on many promotions yet,” Zuckerberg said in a Threads post announcing the milestone. 

The app’s sprint to 100 million users was much speedier than that of OpenAI-owned ChatGPT, which became the fastest-growing consumer application in history in January about two months after its launch, according to a UBS study. 

Still, Threads has some catching up to do. Twitter had nearly 240 million monetizable daily active users as of July last year, according to the company’s last public disclosure before Musk’s takeover. 

Twitter has responded to Threads’ arrival by threatening to sue Meta, alleging that the social media behemoth used its trade secrets and other confidential information to build the app. 

That claim, legal experts say, could be hard to prove. 

Threads bears a strong resemblance to Twitter, as do numerous other social media sites that have cropped up in recent months as users have chafed at Musk’s management of the service. It allows posts that are up to 500 characters long and supports links, photos and videos of up to 5 minutes. 

The app also does not yet have a direct messaging function and lacks a desktop version that certain users, such as business organizations, rely on. 

It also currently lacks hashtags and keyword search functions, which limits both its appeal to advertisers and its utility as a place for following real-time events, as users frequently do on Twitter. 

Still, analysts said the turmoil at Twitter, including recently imposed limits on the number of tweets users can see, could help Threads attract users and advertisers.  

Currently, there are no ads on the Threads app and Zuckerberg said the company would only think about monetization once there was a clear path to 1 billion users. 

Instagram head Adam Mosseri said last week Meta was not trying to replace Twitter and that Threads aimed to focus on light subjects like sports, music, fashion and design.  

He acknowledged that politics and hard news are inevitably going to show up on Threads, in what would be a challenge for the app pitching itself as the “friendly” option for public discourse online. 

New Handbook Highlights Ways to Develop Tech Ethically

In a world where technology, such as artificial intelligence, is advancing at a rapid pace, what guidance do technology developers have in making the best ethically sound decisions for consumers? 

A new handbook, titled “Ethics in the Age of Disruptive Technologies: An Operational Roadmap,” promises to give guidance on such issues as the ethical use of AI chatbots like ChatGPT.

The handbook, released June 28, is the first product of the Institute for Technology, Ethics and Culture, or ITEC, the result of a collaboration between Santa Clara University’s Markkula Center for Applied Ethics and the Vatican’s Center for Digital Culture.

The handbook has been in the works for a few years, but the authors said the rapid escalation of AI usage, along with the security threats and privacy concerns that followed the release of ChatGPT, gave their work a new sense of urgency.

Enter Father Brendan McGuire.

McGuire worked in the tech industry, serving as executive director of the Personal Computer Memory Card International Association in the early 1990s, before entering the priesthood about 23 years ago. 

McGuire said that over the years, he’s continued to meet with friends from the tech world, many of whom are now leaders in the industry. But, about 10 years ago, their discussions started to get more serious, he said.

“They said, ‘What is coming over the hill with AI, it’s amazing, it’s unbelievable. But it’s also frightening if we go down the wrong valley,'” McGuire said.

“There’s no mechanism to make decisions,” McGuire said, quoting his former colleagues. He then contacted Kirk Hanson, who was then head of the Markkula Center, as well as a local bishop.

“The three of us got together and brainstormed, ‘What could we do?'” McGuire said. “We knew that each of these companies are global companies, so, therefore, they wouldn’t really respect a pastor or a local bishop. I said, if we could get somebody from the Vatican to pay attention, then we could make some traction.”

For McGuire, a Catholic priest, getting guidance from Pope Francis and the Vatican — with its diplomatic, cultural, and spiritual influence — was a natural step. He said he was connected with Bishop Paul Tighe, who was serving as the secretary of the Dicastery for Culture and Education at the Vatican, a department that works for the development of people’s human values.

McGuire said Tighe was asked by Pope Francis to look into further addressing digital and tech ethical issues.

After a few years of informal collaborations, the Markkula Center and the Vatican officially created the ITEC initiative in 2019. 

“We’re co-creators with God when we make these technologies,” he said, recognizing that technology can be used for good or bad purposes.  

The Vatican held a conference in 2019 in Rome called “The Common Good in the Digital Age.” McGuire said about 270 people attended, including Silicon Valley CEOs and experts in robotics, cyberwarfare and security. 

After gathering research by talking with tech leaders, the ITEC team decided to create a practical handbook to help companies think about and question at every level — from inception to creation to implementation — how technology can be used in an ethically positive way.

“Get the people who are designing it. Get the people who are writing code, get the people who are implementing it and not wait for some regulator to say, ‘You can’t do that,'” McGuire said.

These guidelines aren’t just for Catholics, he said. 

One of the handbook’s co-authors, Ann Skeet, senior director of leadership ethics at the Markkula Center, said the handbook is very straightforward and written in a manner business leaders are familiar with. 

“We’ve tried to write in the language of business and engineers so that it’s familiar to them,” Skeet said. “When they pick it up and they go through the five stages, and they see all the checklists and the resources, they actually recognize some of them. … We’ve done our best to make it as usable and practical as possible and as comprehensive as possible.”

“What’s important about this book is it puts materials right in the hands of executives inside the companies so that they can move a little bit past this moment of ‘analysis paralysis’ that we’re in while people are waiting to see what the regulatory environment is going to be like and how that unfolds.” 

In June, the European Parliament passed a draft law called the AI Act, which would restrict uses of facial recognition software and require AI creators to disclose more about the data used to create their programs. 

In the United States, policy ideas have been released by the White House that suggest rules for testing AI systems and protecting privacy rights.

“AI and ChatGPT are the hot topic right now,” Skeet said. “Every decade or so we see a technology come along, whether it’s the internet, social media, the cellphone, that’s somewhat of a game-changer and has its own inherent risks, so you can really apply this work to any technology.”

This handbook comes as leaders in AI are calling for help. In May, Sam Altman of OpenAI stated the need for a new agency to help regulate the powerful systems, and Microsoft President Brad Smith said government needs to “move faster” as AI progresses. 

Google CEO Sundar Pichai has also called for an “AI Pact” of voluntary behavioral standards while awaiting new legislation. 

AI Robots at UN Reckon They Could Run the World Better

A panel of AI-enabled humanoid robots told a United Nations summit Friday that they could eventually run the world better than humans.

But the social robots said they felt humans should proceed with caution when embracing the rapidly developing potential of artificial intelligence.

And they admitted that they cannot — yet — get a proper grip on human emotions.

Some of the most advanced humanoid robots were at the U.N.’s two-day AI for Good Global Summit in Geneva.

They joined around 3,000 experts in the field to try to harness the power of AI — and channel it into being used to solve some of the world’s most pressing problems, such as climate change, hunger and social care.

They were assembled for what was billed as the world’s first news conference with a packed panel of AI-enabled humanoid social robots.

“What a silent tension,” one robot said before the news conference began, reading the room.

Asked about whether they might make better leaders, given humans’ capacity to make errors, Sophia, developed by Hanson Robotics, was clear.

We can achieve great things

“Humanoid robots have the potential to lead with a greater level of efficiency and effectiveness than human leaders,” it said.

“We don’t have the same biases or emotions that can sometimes cloud decision-making and can process large amounts of data quickly in order to make the best decisions.

“AI can provide unbiased data while humans can provide the emotional intelligence and creativity to make the best decisions. Together, we can achieve great things.”

The summit is being convened by the U.N.’s ITU tech agency.

ITU chief Doreen Bogdan-Martin warned delegates that AI could end up in a nightmare scenario in which millions of jobs are put at risk and unchecked advances lead to untold social unrest, geopolitical instability and economic disparity.

Ameca, which combines AI with a highly realistic artificial head, said that depended on how AI was deployed.

“We should be cautious but also excited for the potential of these technologies to improve our lives,” the robot said.

Asked whether humans can truly trust the machines, it replied: “Trust is earned, not given… it’s important to build trust through transparency.”

Living until 180?

As the development of AI races ahead, the humanoid robot panel was split on whether there should be global regulation of their capabilities, even though that could limit their potential.

“I don’t believe in limitations, only opportunities,” said Desdemona, who sings in the Jam Galaxy Band.

Robot artist Ai-Da said many people were arguing for AI regulation, “and I agree.”

“We should be cautious about the future development of AI. Urgent discussion is needed now.”

Before the news conference, Ai-Da’s creator Aidan Meller told AFP that regulation was a “big problem” as it was “never going to catch up with the paces that we’re making.”

He said the speed of AI’s advance was “astonishing.”

“AI and biotechnology are working together, and we are on the brink of being able to extend life to 150, 180 years old. And people are not even aware of that,” said Meller.

He reckoned that Ai-Da would eventually be better than human artists.

“Where any skill is involved, computers will be able to do it better,” he said.

Let’s get wild

At the news conference, some robots were not sure when they would hit the big time, but predicted it was coming — while Desdemona said the AI revolution was already upon us.

“My great moment is already here. I’m ready to lead the charge to a better future for all of us… Let’s get wild and make this world our playground,” it said.

Among the things humanoid robots do not yet have are a conscience and the emotions that shape humanity: relief, forgiveness, guilt, grief, pleasure, disappointment and hurt.

Ai-Da said it was not conscious but understood that feelings were how humans experienced joy and pain.

“Emotions have a deep meaning and they are not just simple… I don’t have that,” it said.

“I can’t experience them like you can. I am glad that I cannot suffer.”

Chinese Regulators Fine Ant Group $985M in Signal That Tech Crackdown May End

HONG KONG — Chinese regulators are fining Ant Group 7.123 billion yuan ($985 million) for violating regulations in its payments and financial services, an indicator that more than two years of scrutiny and crackdown on the firm that led it to scrap its planned public listing may have come to an end.

The People’s Bank of China imposed the fine on the financial technology provider on Friday, stating that Ant had violated laws and regulations related to corporate governance, financial consumer protection, participation in business activities of banking and insurance institutions, payment and settlement business, and compliance with anti-money laundering obligations.

The fine comes more than two years after regulators pulled the plug on Ant Group’s $34.5 billion IPO — which would have been the biggest of its time — in 2020. Since then, the company has been ordered to revamp its business and behave more like a financial holding company, as well as rectify unfair competition in its payments business.

“We will comply with the terms of the penalty in all earnestness and sincerity and continue to further enhance our compliance governance,” Ant Group said in a statement.

The move is widely seen as wrapping up Beijing’s probe into the firm and allowing Ant to revive its initial public offering. Chinese gaming firm Tencent, which operates messaging app WeChat, also received a 2.99 billion yuan fine ($414 million) for regulatory violations over its payments services, according to the central bank Friday, signaling that the crackdown on the Chinese technology sector could ease.

Alibaba’s New York-listed stock was up over 9% Friday afternoon.

Ant Group, founded by Alibaba co-founder Jack Ma, started out as Alipay, a digital payments system aimed at making transactions more secure and trustworthy for buyers and sellers on Alibaba’s Taobao e-commerce platform.

The digital wallet soon grew to become a leading player in the online payments market in China, alongside Tencent’s WeChat Pay. It eventually grew into Ant, Alibaba’s financial arm that also offers wealth management products.

At one point, Ant’s Yu’ebao money-market fund was the largest in the world, but regulators have since ordered Ant to reduce the fund’s balance.

In January, it was announced that Ma would give up control of Ant Group. The move followed other efforts over the years by the Chinese government to rein in Ma and the country’s tech sector more broadly. Two years ago, the once high-profile Ma largely disappeared from view for 2 1/2 months after criticizing China’s regulators.

Yet Ma’s surrender of control came after other signs the government was easing up on Chinese online firms. Late last year Beijing signaled at an economic work conference that it would support technology firms to boost economic growth and create more jobs.

Also in January, the government said it would allow Ant Group to raise $1.5 billion in capital for its consumer finance unit.

Iran Blocks Public Access to Threads App; Raisi’s Account Created

Just one day after its launch, Threads, the latest social media network, was blocked by the Islamic Republic, denying access to the Iranian population. This action occurred even though an account had been created for Iran President Ebrahim Raisi on the platform.

Raisi’s user account, under the handle raisi.ir, was established on Threads on Thursday afternoon. By Friday noon, it had garnered 27,000 followers. He has yet to make any posts, apparently because Presidential Office staff administer Raisi’s social media accounts.

Even as Raisi’s account debuted on the platform, numerous Iranian social media users voiced concerns about restricted access to Threads starting Thursday evening. Users said that, as with Instagram, Twitter and Facebook, they need a VPN or proxy to connect to Threads.  

Journalist Ehsan Bodaghi said on Twitter: “During the election, Mr. Raisi spoke about the importance of people’s online businesses and his 2 million followers on Instagram. After one year, he blocked and filtered all social media platforms, and now, within the initial hours, he has become a member of the social network #Threads, which his own government has filtered. Inconsistency knows no bounds!”

Another journalist, Javad Daliri, posted this on Twitter: “Mr. Raisi and Mr. Ghalibaf raced each other to join the new social network #Threads. As a citizen, I have a question: Can one issue filtering orders and be among the first to break the filtering and join? By the way, was joining this unknown network really your priority?”

Mohammad Bagher Ghalibaf is Speaker of the Parliament of Iran.

Despite the Iranian government’s frequent censorship of social media platforms, officials of the Islamic Republic use these platforms for communication. Notably, Ayatollah Ali Khamenei, the leader of the Islamic Republic of Iran, maintains an active presence on Twitter.

Threads was introduced by Meta, the parent company of Facebook and Instagram. The app was launched late Wednesday, and within two days it had amassed more than 55 million users. The social network shares similarities with Twitter, allowing users to interact with posts through likes and reposts, and nearly doubles the character count limit imposed by Twitter.

The similarities between Threads and Twitter have sparked a legal dispute between Elon Musk, the owner of Twitter, and Meta’s Mark Zuckerberg. Musk has accused Meta of employing former Twitter engineers and tweeted, “Competition is good, but cheating is not.”

Meta dismissed the copycat allegation, posting on Threads: “No one on the Threads engineering team is a former Twitter employee — that’s just not a thing.”  

Combat Drone Operator Describes Their Many Uses

Ukraine has been using drones for reconnaissance and attacks since the start of Russia’s invasion. But sometimes combat drone operators use them to save civilians — or even capture the enemy. Anna Kosstutschenko went to the Donbas region to find out more.
Camera: Pavel Suhodolskiy. Produced by: Pavel Suhodolskiy.

What Is Threads? Questions About Meta’s New Twitter Rival, Answered

Threads, a text-based app built by Meta to rival Twitter, is live.

The app, billed as the text version of Meta’s photo-sharing platform Instagram, became available Wednesday night to users in more than 100 countries — including the U.S., Britain, Australia, Canada and Japan. Despite some early glitches, 30 million people had signed up before noon on Thursday, Meta CEO Mark Zuckerberg said on Threads.

New arrivals to the platform include celebrities like Oprah, pop star Shakira and chef Gordon Ramsay — as well as corporate accounts from Taco Bell, Netflix, Spotify, The Washington Post and other media outlets.

Threads, which Meta says provides “a new, separate space for real-time updates and public conversations,” arrives at a time when many are looking for Twitter alternatives to escape Elon Musk’s raucous oversight of the platform since acquiring it last year for $44 billion. But Meta’s new app has also raised data privacy concerns and is notably unavailable in the European Union.

Here’s what you need to know about Threads.

How Can I Use Threads?

Threads is now available for download in Apple and Google Android app stores for people in more than 100 countries.

Threads was built by the Instagram team, so Instagram users can log into Threads through their Instagram account. Your username and verification status will carry over, according to the platform, but you will also have options to customize other areas of your profile — including whether or not you want to follow the same people that you do on Instagram.

Because Threads and Instagram are so closely linked, it’s also important to be cautious of account deletion. According to Threads’ supplemental privacy policy, you can deactivate your profile at any time, “but your Threads profile can only be deleted by deleting your Instagram account.”

Can I Use Threads If I Don’t Have An Instagram Account?

For now, only Instagram users can create Threads accounts. If you want to access Threads, you will have to sign up for Instagram first.

While this may receive some pushback, Mike Proulx, vice president and research director at Forrester, said making Threads an extension of Instagram was a smart move on Meta’s part.

“It’s piquing [user] curiosity,” Proulx said, noting that Instagram users are getting alerts about their followers joining Threads — causing more and more people to sign up. “That’s one of the reasons why Threads got over 10 million people to sign up in just a seven hour period” after launching.

How Is Threads Similar To Twitter?

Threads’ microblogging experience is very similar to Twitter. Users can repost, reply to or quote a thread, for example, and can see the number of likes and replies that a post has received. “Threads” can run up to 500 characters — compared with Twitter’s 280-character threshold — and can include links, photos and videos up to five minutes long.

In early replies on Threads, Zuckerberg said making the app “a friendly place” will be a key to success — adding that that was “one reason why Twitter never succeeded as much as I think it should have, and we want to do it differently.”

Is Twitter Seeking Legal Action Against Meta?

According to a letter obtained by Semafor on Thursday, Twitter has threatened legal action against Meta over Threads. In the letter, which was addressed to Meta CEO Mark Zuckerberg and dated Wednesday, Alex Spiro, an attorney representing Twitter, accused Meta of unlawfully using Twitter’s trade secrets and other intellectual property by hiring former Twitter employees to create a “copycat” app.

Meta spokesperson Andy Stone responded to the report of Spiro’s letter on Threads Thursday afternoon, writing, “no one on the Threads engineering team is a former Twitter employee.”

Musk hasn’t directly tweeted about the possibility of legal action, but he has replied to several snarky takes on the Threads launch. The Twitter owner responded with a laughing emoji to one tweet suggesting that Meta’s app was built largely through the use of the copy-and-paste function.

Twitter CEO Linda Yaccarino has also not publicly commented on Wednesday’s letter, but she appeared to address Threads’ launch in a Thursday tweet, writing that “the Twitter community can never be duplicated.”

Hasn’t This Been Done Before?

The similarities between Meta’s new text-based app and Twitter suggest the company is working to directly challenge its rival. Musk’s tumultuous ownership of Twitter has resulted in a series of unpopular changes that have turned off users and advertisers, some of whom are searching for alternatives.

Threads is the latest Twitter rival to emerge in this landscape following Bluesky, Mastodon and Spill.

How Does Threads Moderate Content?

According to Meta, Threads will use the same safety measures deployed on Instagram — which include enforcing Instagram’s community guidelines and providing tools to control who can mention or reply to users.

Content warnings — on search queries ranging from conspiracy theory groups to misinformation about COVID-19 vaccinations — also appear to be similar to Instagram.

What Are The Privacy Concerns?

Threads could collect a wide range of personal information — including health, financial, contacts, browsing and search history, location data, purchases and “sensitive info,” according to its data privacy disclosure on the App Store.

Threads also isn’t available in the European Union right now, which has strict data privacy rules.

Meta informed Ireland’s Data Privacy Commission, Meta’s main privacy regulator for the EU, that it has no plans yet to launch Threads in the 27-nation bloc, commission spokesman Graham Doyle said. The company said it is working on rolling the app out to more countries — but pointed to regulatory uncertainty for its decision to hold off on a European launch.

What’s The Future For Threads?

Success for Threads is far from guaranteed. Industry watchers point to Meta’s track record of starting standalone apps that were later shut down, including, Proulx notes, an Instagram messaging app also called “Threads” that was shut down less than two years after its 2019 launch.

Still, Proulx and others say the new app could be a significant headache for Musk and Twitter.

“The euphoria around a new service and this initial explosion will probably settle down. But it is apparent that this alternative is here to stay and will prove to be a worthy rival given all of Twitter’s woes,” technology analyst Paolo Pescatore of PP Foresight said, noting that combining Twitter-style features with Instagram’s look and feel could drive user engagement.

Threads is in its early days, however, and much depends on user feedback. Pescatore believes the close tie between Instagram and Threads might not resonate with everyone. The rollout of new features will also be key.

 

Meta’s New Twitter Competitor, Threads, Boasts Tens of Millions of Sign-Ups

Tens of millions of people have signed up for Meta’s new app, Threads, as it aims to challenge competitor platform Twitter.

Threads launched on Wednesday in the United States and in more than 100 other countries.

In a Thursday morning post on the platform, Meta CEO Mark Zuckerberg said 30 million people had signed up.

“Feels like the beginning of something special, but we’ve got a lot of work ahead to build out the app,” he said in the post.

Threads is a text-based version of Meta’s social media app Instagram. The company says it provides “a new, separate space for real-time updates and public conversations.”

The high number of sign-ups is likely an indication that users are looking for an alternative to Twitter, which has been stumbling since Elon Musk bought it last year. Meta appears to have taken advantage of rival Twitter’s many blunders in pushing out Threads.

Like Twitter, Threads features short text posts that users can like, re-post and reply to. Posts can be up to 500 characters long and include links, photos and videos that are up to five minutes long, according to a Meta blog post.

Unlike Twitter, Threads does not include any direct message capabilities.

“Let’s do this. Welcome to Threads,” Zuckerberg wrote in his first post on the app, along with a fire emoji. He said the app had 10 million sign-ups in the first seven hours.

Kim Kardashian, Shakira and Jennifer Lopez are among the celebrities who have joined the platform, as well as politicians like Democratic U.S. Representative Alexandria Ocasio-Cortez. Brands like HBO, NPR and Netflix have also set up accounts.

Threads is not yet available in the European Union because of regulatory concerns. The 27-country bloc has stricter privacy rules than most other countries.

Threads launched as a standalone app, but users can log in using their Instagram credentials and follow the same accounts.

Analysts have said Threads’ links to Instagram may provide it with a built-in user base — potentially presenting yet another challenge to beleaguered Twitter. Instagram has more than 2 billion active users per month.

Twitter’s new CEO Linda Yaccarino appeared to respond to the debut of Threads in a Twitter post Thursday.

“We’re often imitated — but the Twitter community can never be duplicated,” she said in the post that did not directly mention Threads.

Some information in this report came from The Associated Press and Reuters.

Indian Court’s Dismissal of Twitter’s Petition Sparks Concerns About Free Online Speech

In India, a recent court judgement that dismissed a legal petition by Twitter challenging the federal government’s orders to block tweets and accounts is a setback for free speech, according to digital rights activists.  

The Karnataka High Court, which delivered its judgement last week, also imposed a fine of $61,000 on the social media company for its delay in complying with the government’s takedown orders.  

“The order sets a dangerous precedent for curbing online free speech without employing procedural safeguards that are meant to protect users of online social media platforms,” Radhika Roy, a lawyer and spokesperson for the digital rights organization, Internet Freedom Foundation, told VOA.  

Twitter’s lawsuit filed last year was seen as an effort to push back against strict information technology laws passed in 2021 that allow the government to order the removal of social media posts.  

The government has defended the regulations, saying they are necessary to combat online misinformation in the interest of national security, among other reasons, and says social media companies must be accountable. Critics say the rules enable the government to clamp down on online comments that authorities consider critical.   

In court, Twitter argued that 39 of the federal government’s orders to take down content went against the law. It is not known which content the orders referred to, but media reports have said that many targeted political content and dissenting views about farm laws that sparked a massive farmers’ protest in 2020.  

The government told the court the content was posted by “anti-India campaigners.” 

The court ruled that the government has the power to block not just tweets, but entire accounts as well.   

“I would disagree with that. The court had an opportunity to ensure that while illegal speech is taken down, free speech for individuals is not restricted,” Nikhil Pahwa, founder of the digital news portal MediaNama, told VOA. “But the court has reiterated that the government has full authority to censor whatever they want and whatever they deem illegal, and that is a challenge for free speech in India.”  

The government has welcomed the decision of the Karnataka High Court. “Honourable court upholds our stand. Law of the land must be followed,” Minister of Communications, Electronics & Information Technology, Ashwini Vaishnaw, said in a tweet. 

Twitter had also told the court the grounds for taking down content had not been spelled out by the government and that those whose tweets or accounts were blocked had not been informed. But the court said that the user did not necessarily have to be informed. 

Digital rights activists say this raises concerns because there is no way to ascertain whether the government’s takedown requests are legal.   

“This excessive power (of blocking whole accounts) coupled with the lack of transparency surrounding the blocking orders, spells trouble for any entity whose content has the potential of being deemed unfavourable to the government,” according to Roy.   

Pahwa said the fine imposed by the court on Twitter would also discourage social media companies from going to court to protect their users’ right to free speech. “We are at a moment of despair for free speech in India. This does not bode well for users who might be critical of the government and its actions and inactions leading up to next year’s general elections,” according to Pahwa. 

Expressing concerns that India is moving towards imposing greater restrictions on online speech, Roy says that “the Karnataka judgement ends up perpetuating the misuse of laws restricting free speech rather than countering its rampant abuse.”  

Last month, Jack Dorsey, who stepped down as chief executive in 2021, said that during his tenure, Twitter had been threatened with a shutdown in India and raids on the homes of its employees if it refused to comply with takedown requests. The government dismissed his comments as an “outright lie.”  

Twitter has said that India ranked fourth among countries that requested removal of content last year — behind Japan, Russia and Turkey. 

India, with an estimated 24 million Twitter users, is one of the largest markets for the social media company.  

Under Elon Musk, the company has complied with takedown orders. Musk, who met Indian Prime Minister Narendra Modi during his visit to the United States last month, has said the company has no choice “but to obey local government laws” in any country or it risks being shut down.  

Twitter Chaos Leaves Door Open for Meta’s Rival App

Elon Musk spent the weekend further alienating Twitter users with more drastic changes to the social media giant, and he is facing a new challenge as tech nemesis Mark Zuckerberg prepares to launch a rival app this week.

Zuckerberg’s Meta group, which owns Facebook, has listed a new app in stores as “Threads, an Instagram app”, available for pre-order in the United States, with a message saying it is “expected” this Thursday.

The two men have clashed for years, but a recent comment by a Meta executive suggesting that Twitter was not run “sanely” irked Musk, eventually leading the two men to challenge each other to a cage fight.

Since buying Twitter last year for $44 billion, Musk has fired thousands of employees and charged users $8 a month to have a blue checkmark and a “verified” account.

Over the weekend, he limited the number of posts users could view and decreed that nobody could look at a tweet unless they were logged in, meaning external links no longer worked for many.

He said he needed to fire up extra servers just to cope with the demand as artificial intelligence (AI) companies scraped “extreme levels” of data to train their models.

But commentators have poured scorn on that idea and marketing experts say he has massively alienated both his user base and the advertisers he needs to get profits rolling.

In another move that shocked users, Twitter announced Monday that access to TweetDeck, an app that allows users to monitor several accounts at once, would be limited to verified accounts next month.

John Wihbey, an associate professor of media innovation and technology at Northeastern University, told AFP that plenty of people wanted to quit Twitter for ethical reasons after Musk took over, but he had now given them a technical reason to leave too.

And he added that Musk’s decision to sack thousands of workers meant it had long been expected that the site would become “technically unusable”.

‘Remarkably bad’

Musk has said he wants to make Twitter less reliant on advertising and boost income from subscriptions.

Yet he chose advertising specialist Linda Yaccarino as his chief executive recently, and she has spoken of going into “hand-to-hand combat” to win back advertisers.

“How do you tell Twitter advertisers that your most engaged free users potentially will never see their ads because of data caps on their usage,” tweeted Justin Taylor, a former marketing executive at Twitter.

Mike Proulx, vice president at market research firm Forrester, said the weekend’s chaos had been “remarkably bad” for both users and advertisers.

“Advertisers depend on reach and engagement yet Twitter is currently decimating both,” he told AFP.

He said Twitter had “moved from stable to startup” and Yaccarino, who remained silent over the weekend, would struggle to restore its credibility, leaving the door open to Twitter’s rivals to suck up any cash from advertisers.

‘Open secret’

The technical reasons Musk gave for limiting the views of users immediately brought a backlash.

Many social media users speculated that Musk had simply failed to pay the bill for his servers.

French social data analyst Florent Lefebvre said AI firms were more likely to train their models on books and media articles than social network content, which “is of much poorer quality, full of mistakes and lacking in context”.

Yoel Roth, who stepped down as Twitter’s head of security weeks after Musk took over, said the idea that data scraping had caused such performance problems that users needed to be forced to log in “doesn’t pass the sniff test”.

“Scraping was the open secret of Twitter data access,” he wrote on the Bluesky social network — another Twitter rival.

“We knew about it. It was fine.”

 

Sweden Orders Four Companies to Stop Using Google Tool

STOCKHOLM, SWEDEN — Sweden on Monday ordered four companies to stop using a Google tool that measures and analyzes web traffic, as doing so transfers personal data to the United States. One company was fined the equivalent of more than $1.1 million. 

Sweden’s privacy protection agency, the IMY, said it had examined the use of Google Analytics by the firms following a complaint by the Austrian data privacy group NOYB (none of your business), which has filed dozens of complaints against Google across Europe. 

NOYB asserted that the use of Google Analytics for web statistics by the companies resulted in the transfer of European data to the United States in violation of the EU’s data protection regulation, the GDPR. 

The GDPR allows the transfer of data to third countries only if the European Commission has determined they offer at least the same level of privacy protection as the EU. A 2020 EU Court of Justice ruling struck down an EU-U.S. data transfer deal as being insufficient. 

The IMY said it considers the data sent to Google Analytics in the United States by the four companies to be personal data and that “the technical security measures that the companies have taken are not sufficient to ensure a level of protection that essentially corresponds to that guaranteed within the EU.” 

It fined telecommunications firm Tele2 $1.1 million and online marketplace CDON $27,700.  

Grocery store chain Coop and Dagens Industri newspaper had taken more measures to protect the data being transferred and were not fined. 

Tele2 had stopped using Google Analytics of its own volition, and the IMY ordered the other companies to stop using it. 

IMY legal adviser Sandra Arvidsson, who led the investigation, said the rulings have “made clear what requirements are placed on technical security measures and other measures when transferring personal data to a third country, in this case the United States.” 

NOYB welcomed the IMY’s ruling. 

“Although many other European authorities (e.g., Austria, France and Italy) already found that the use of Google Analytics violates the GDPR, this is the first financial penalty imposed on companies for using Google Analytics,” it said in a statement. 

At the end of May, the European Commission said it hoped to conclude by the end of the summer a new legal framework for data transfers between the EU and the United States. 

The GDPR, in place since 2018, can lead to penalties of up to $21.8 million or 4% of a company’s global revenue, whichever is higher. 

In US, 5G Wireless Signals Could Disrupt Flights Starting This Weekend

Airline passengers who have endured tens of thousands of weather-related flight delays this week could face a new source of disruptions starting Saturday, when wireless providers are expected to power up new 5G systems near major airports.

Aviation groups have warned for years that 5G signals could interfere with aircraft equipment, especially devices that use radio waves to measure distance from the ground, which are critical when planes land in low visibility.

Predictions that interference would cause massive flight groundings failed to come true last year, when telecom companies began rolling out the new service. They then agreed to limit the power of the signals around busy airports, giving airlines an extra year to upgrade their planes.

The leader of the nation’s largest pilots’ union said crews will be able to handle the impact of 5G, but he criticized the way the wireless licenses were granted, saying it had added unnecessary risk to aviation.

Transportation Secretary Pete Buttigieg recently told airlines that flights could be disrupted because a small portion of the nation’s fleet has not been upgraded to protect against radio interference.

Most of the major U.S. airlines say they are ready. American, Southwest, Alaska, Frontier and United say all of their planes have height-measuring devices, called radio altimeters, that are protected against 5G interference.

The big exception is Delta Air Lines. Delta says 190 of its planes, which include most of its smaller ones, still lack upgraded altimeters because its supplier has been unable to provide them fast enough.

The airline does not expect to cancel any flights because of the issue, Delta said Friday. The airline plans to route the 190 planes carefully to limit the risk of canceling flights or forcing planes to divert from airports where visibility is low because of fog or low clouds.

The Delta planes that have not been retrofitted include several models of Airbus jets. The airline’s Boeing jets have upgraded altimeters, as do all Delta Connection planes, which are operated by Endeavor Air, Republic Airways and SkyWest Airlines, the airline said.

JetBlue did not respond to requests for comment but told The Wall Street Journal it expected to retrofit 17 smaller Airbus jets by October, with possible “limited impact” some days in Boston.

Wireless carriers including Verizon and AT&T use a part of the radio spectrum called C-Band, which is close to frequencies used by radio altimeters, for their new 5G service. The Federal Communications Commission granted them licenses for the C-Band spectrum and dismissed any risk of interference, saying there was ample buffer between C-Band and altimeter frequencies.

When the Federal Aviation Administration sided with airlines and objected, the wireless companies pushed back the rollout of their new service. In a compromise brokered by the Biden administration, the wireless carriers then agreed not to power up 5G signals near about 50 busy airports. That postponement ends Saturday.

AT&T declined to comment. Verizon did not immediately respond to a question about its plans.

Buttigieg reminded the head of trade group Airlines for America about the deadline in a letter last week, warning that only planes with retrofitted altimeters would be allowed to land under low-visibility conditions. He said more than 80% of the U.S. fleet had been retrofitted, but a significant number of planes, including many operated by foreign airlines, have not been upgraded.

“This means on bad-weather, low-visibility days in particular, there could be increased delays and cancelations,” Buttigieg wrote. He said airlines with planes awaiting retrofitting should adjust their schedules to avoid stranding passengers.

Airlines say the FAA was slow to approve standards for upgrading the radio altimeters and that supply-chain problems have made it difficult for manufacturers to produce enough of the devices. Nicholas Calio, head of Airlines for America, complained about a rush to modify planes “amid pressure from the telecommunications companies.”

Jason Ambrosi, a Delta pilot and president of the Air Line Pilots Association, accused the FCC of granting 5G licenses without consulting aviation interests, which he said “has left the safest aviation system in the world at increased risk.” But, he said, “Ultimately, we will be able to address the impacts of 5G.”

In AI Tussle, Twitter Restricts Number of Posts Users Can Read

Elon Musk announced Saturday that Twitter would temporarily restrict how many tweets users could read per day, in a move meant to tamp down on the use of the site’s data by artificial intelligence companies. 

The platform is limiting verified accounts to reading 6,000 tweets a day. Non-verified users — the free accounts that make up the majority of users — are limited to reading 600 tweets per day.  

New unverified accounts are limited to 300 tweets per day. 

The decision was made “to address extreme levels of data scraping” and “system manipulation” by third-party platforms, Musk said in a tweet Saturday afternoon, as some users quickly hit their limits. 

“Goodbye Twitter” was a trending topic in the United States following Musk’s announcement. 

Twitter would soon raise the ceiling to 8,000 tweets per day for verified accounts, 800 for unverified accounts and 400 for new unverified accounts, Musk said. 

Twitter’s billionaire owner did not give a timeline for how long the measures would be in place.  

The day before, Musk had announced that it would no longer be possible to read tweets on the site without an account. 

Much of the data scraping was coming from firms using it to build their AI models, Musk said, to the point that it was causing traffic issues with the site. 

In creating AI that can respond in a human-like capacity, many companies feed them examples of real-life conversations from social media sites. 

“Several hundred organizations (maybe more) were scraping Twitter data extremely aggressively, to the point where it was affecting the real user experience,” Musk said.  

“Almost every company doing AI, from startups to some of the biggest corporations on Earth, was scraping vast amounts of data,” he said.  

“It is rather galling to have to bring large numbers of servers online on an emergency basis just to facilitate some AI startup’s outrageous valuation,” he said. 

Twitter is not the only social media giant to have to wrangle with the rapid acceleration of the AI sector. 

In mid-June, Reddit raised prices on third-party developers that were using its data and sweeping up conversations posted on its forums. 

It proved a controversial move, as many regular users also accessed the site via third-party platforms, and it marked a shift from previous arrangements in which social media data had generally been provided for free or for a small charge.

FBI Turning to Social Media to Track Traitors

If you logged onto social media over the past few months, you may have seen it – a video of the Russian Embassy on a gray, overcast day in Washington with the sounds of passing cars and buses in the background.

A man’s voice asks in English, “Do you want to change your future?” Russian subtitles appear on the bottom of the screen and the narrator makes note of the first anniversary of “Russia’s further invasion of Ukraine.”

As somber music begins to play, the camera pans to the left and takes the viewer down Wisconsin Avenue, to the Adams Morgan Metro station and on through Washington, ending at FBI headquarters, a few blocks from the White House.

“The FBI values you. The FBI can help you,” FBI Assistant Director Alan Koehler says as the video wraps up, Russian subtitles still appearing on the screen. “But only you have the power to take the first step.”

The video, put out by the FBI’s Washington Field Office, first appeared as a posting on the field office’s Twitter account on February 24. Another five versions started the same day as paid advertisements on Facebook and Instagram, costing the bureau an estimated $5,500 to $6,500.

That money may seem like a pittance for a government agency with an annual budget of more than $10 billion, but it was not the first nor the last time the FBI spent money to court Russian officials.

The video is part of an expansive, long-running campaign by the FBI to use social media advertisements to recruit disgruntled Russian officials stationed across the United States and beyond, in part to sniff out Americans who have betrayed their country in order to aid Moscow.

A VOA analysis finds the FBI has paid tens of thousands of dollars, at minimum, to multiple platforms for social media ads targeting Russian officials, with the pace of such ad buys increasing just before and then after Moscow launched its latest invasion of Ukraine.

Multiple former U.S. counterintelligence officials who spoke to VOA about the FBI’s efforts described the advertising as money well spent.

The FBI wants to find well-placed Russian officials who can “help identify where American spies may be,” said Douglas London, a three-decade veteran of the CIA’s Clandestine Service.

“It seeks Russian agents to catch and convict American spies and Russian illegals,” he told VOA, describing the mission as a part of the bureau’s DNA.

Another veteran CIA official, Jim Olson, agreed, telling VOA the goal of the FBI’s outreach to Russian officials is unmistakable.

“I call that hanging out the shingle,” said Olson, a former counterintelligence chief.

“For every American traitor, every American spy, there are members of that intelligence service who know the identity of that American or know enough about what the production is to give us a lead in doing the identification,” Olson said.

‘All available tools’

The FBI declined to comment directly on its decision to spend several thousand dollars to run the two-minute-long video as an ad on Facebook and Instagram, simply saying it “uses a variety of means” to gather intelligence.

“The FBI will evaluate all available tools to protect the national security interests of the United States,” the FBI’s Washington Field Office told VOA in an email. “And we will use all legal means available to locate individuals with information that can help protect the United States from threats to our national security.”

Some of the FBI’s earlier forays into social media advertising did get some public attention, first in October 2019 and then again in March of last year.

However, a review of publicly available data indicates the bureau’s use of social media for counterintelligence is more expansive than previously understood.

According to data in the Meta Ad Library, which contains information on Facebook and Instagram ads dating back to May 2018, the FBI and its field offices have so far spent just under $40,000 on ads targeting Russian speakers, generating as many as 6.9 million views.

While most of the ads targeted specific locations, like Washington and New York, some were seen much further afield, getting views across much of the United States and even in countries like Spain, Poland, Nigeria, France and Croatia.

It would also appear the FBI’s paid ads ran on platforms other than Meta.

Nicholas Murphy, a 20-year-old second-year student at Georgetown University in Washington, was in his dorm room last March searching for news about Russia’s invasion of Ukraine when he saw an ad on YouTube, the video-sharing social media platform owned by Google.

“[It was] just text with a kind of a strange like background to it … all in Russian,” said Murphy, a Park City, Utah, native who does not speak Russian and who used a translator app to decipher the ad.

“At the time I didn’t know if it was coming from the Russian government, if it was coming from our government, if it was kind of propaganda, if it was fake,” Murphy told VOA. “It conjured up a lot of thoughts about Russian influence over Facebook ads in the [2016 U.S.] election.”

Murphy said he came across the ad another two to three times over the ensuing weeks. And, it turned out, he was not alone. A handful of other students were also starting to see some of the ads, including a couple of classmates in a Russian literature class.

Just how many ads the FBI paid to run on YouTube, or via Google, is unclear.

A search of Google’s recently launched Ad Transparency Center shows the FBI paid to run the Russian language version of its two-minute-long video most recently on April 28. But the database only shows information for the past 30 days and Google says it does not share information on advertiser spending.

It is also unclear whether the FBI paid to run any ads on Twitter in addition to pushing out information through its own Twitter accounts. Twitter responded to an email from VOA requesting information with its now standard poop emoji.

The FBI itself refused to provide details regarding the scope of its social media advertising efforts although the Washington Field Office did acknowledge to VOA via email that it uses “various social media platforms.”

The Washington Field Office also defended its use of social media advertising despite indications that the ads themselves, like the one seen by Georgetown University student Nicholas Murphy, do not always reach the intended audience.

“The FBI views these efforts as productive and cost effective,” the FBI’s Washington Field Office told VOA. The office declined to be more specific about whether any spies have been identified as a result of the ads.

“Russia has long been a counterintelligence threat to the U.S. and the FBI will continue to adapt our investigative and outreach techniques to counter that threat and others,” it said. “We will use all legal means available to locate individuals with information that can help protect the United States from threats to our national security.”

The Russian Embassy in Washington did not respond to calls or emails from VOA seeking comment about the FBI’s use of social media advertisements to target Russian officials in the U.S. But Russian Ambassador Anatoly Antonov did respond to a March 2022 article by The Washington Post about FBI efforts to send ads to cell phones outside the Russian Embassy in Washington.

“Attempts to sow confusion and organize desertion among the staff of @RusEmbUSA are ridiculous,” Antonov was quoted as saying in a tweet by the embassy’s Twitter account.

Some former U.S. counterintelligence officials, though, argue Russia has reason to be worried.

“I think people will come out of the woodwork,” said Olson, the former CIA counterintelligence chief.

FBI agents “see what we all see, and that is that there must be a subset of Russian intelligence officers, SVR officers, GRU officers, who are disillusioned by what’s going on,” he told VOA.

“I think some good Russians are embarrassed, shocked, ashamed of what Putin is doing in Ukraine, killing brother and sister Slavs. And I think that there will be people who would like to strike back against that.”

London, the longtime CIA Clandestine Services official and author of The Recruiter: Spying and the Lost Art of American Intelligence, likewise believes the FBI’s persistent efforts to reach disgruntled Russians on social media will pay off.

“Generally, the Russians who have worked with us have done so out of patriotism … they were upset with the government,” he said.

And the Russian officials that the FBI hopes to reach just need a bit of a nudge.

“They’re aiming this at Russians who are already there mentally but just haven’t crossed,” London said, adding it is not a coincidence that many of the FBI ads show Russians exactly how to get in touch, whether via encrypted communication apps like Signal or by walking right up to the bureau’s front door.

“They’re not doing metaphors here,” he said. “They don’t want anything subject to interpretation.”

Even the language used by the FBI appears to be designed to build trust.

“It’s very much not native,” according to Bradley Gorski, with Georgetown University’s Department of Slavic Languages.

But given the overall quality of the language in the ads, Gorski said it is quite possible all of it is intentional.

“It might be a canny strategy on their part,” he said of the FBI. “If they are reaching out to Russian speakers and want to both communicate with them but let them know who is communicating with them is not a Russian speaker, but is a sort of American doing their best, then this kind of outreach with a little bit stilted, though correct, Russian might communicate that actually better than fully native sort of fluent speech.”

Whether the FBI’s spending on social media advertisements is achieving the desired results is hard to gauge. Public metrics such as those provided by social media companies like Meta can give a sense of how many people are seeing the ads, and where they are, but they shed little light on who ultimately interacts with the ads to the point of responding.

When pressed, FBI officials tell VOA only that the bureau views the ad campaigns as productive.

Others agree.

“Relative to the hardcore military aid the U.S. has provided, that’s a small chunk of change,” said Jason Blazakis, a senior research fellow at The Soufan Center, a global intelligence firm.

And Blazakis, who also directs the Center on Terrorism, Extremism, and Counterterrorism at the Middlebury Institute of International Studies, thinks the FBI’s social media ads might be having an impact even if few Russian officials ever come forward with information.

“Part of it is also messaging to the broader Russian public,” he told VOA, pointing to Russia’s invasion of Ukraine. “There is this influence operational component to it, part of this PR [public relations] battle that is happening on the periphery of the conflict.”

Chipmaker TSMC Says Supplier Was Targeted in Cyberattack

Taiwan Semiconductor Manufacturing Co. said Friday that a cybersecurity incident involving one of its IT hardware suppliers has led to the leak of the vendor’s company data. 

“TSMC has recently been aware that one of our IT hardware suppliers experienced a cybersecurity incident that led to the leak of information pertinent to server initial setup and configuration,” the company said. 

TSMC confirmed in a statement to Reuters that neither its business operations nor its customer information was affected by the cybersecurity incident at its supplier Kinmax.

The TSMC vendor breach is part of a larger trend of significant security incidents affecting various companies and government entities. 

Victims range from U.S. government departments to the UK’s telecom regulator to energy giant Shell, all affected since a security flaw was discovered in Progress Software’s MOVEit Transfer product last month. 

TSMC said it has cut off data exchange with the affected supplier following the incident.

Chinese, Russian Firms to Build Lithium Plants in Bolivia

LA PAZ, BOLIVIA – Chinese and Russian companies will invest more than $1.4 billion in the extraction of lithium in Bolivia, one of the countries with the largest reserves of the mineral used in electric car batteries, the government in La Paz said Friday.  

China’s Citic Guoan and Russia’s Uranium One Group — both with a major government stake — will partner with Bolivia’s state-owned YLB to build two lithium carbonate processing plants, Bolivian President Luis Arce said at a public event.  

Lithium is often described as the “white gold” of the clean-energy revolution, a highly coveted component of mobile phones and electric car batteries.  

“We are consolidating the country’s industrialization process,” Arce said.

Bolivia, which claims to have the world’s largest deposits, in January also signed an agreement with Chinese consortium CBC to build two lithium battery plants.  

The country’s energy ministry said in a statement that each of the two new plants would have the capacity to produce up to 25,000 metric tons of lithium carbonate per year.  

Construction will begin in about three months.  

China and Russia are among Bolivia’s main lithium buyers.  

Lithium is mostly mined in Australia and South America. 

Meta Oversight Board Urges Cambodia Prime Minister’s Suspension from Facebook

Meta Platforms’ Oversight Board on Thursday called for the suspension of Cambodian Prime Minister Hun Sen for six months, saying a video posted on his Facebook page had violated Meta’s rules against violent threats.

The board, which is funded by Meta but operates independently, said the company erred in leaving up the video and ordered its removal from Facebook.

Meta, in a written statement, agreed to take down the video but said it would respond to the recommendation to suspend Hun Sen after a review.

A suspension would silence the prime minister’s Facebook page less than a month before an election in Cambodia, although critics say the poll will be a sham due to Hun Sen’s autocratic rule.

The decision is the latest in a series of rebukes by the Oversight Board over how the world’s biggest social media company handles rule-breaking political leaders and incitement to violence around elections.

The company’s election integrity efforts are in focus as the United States prepares for presidential elections next year.

The board endorsed Meta’s 2021 banishment of former U.S. President Donald Trump – the current front-runner for the 2024 Republican presidential nomination – after the deadly Jan. 6 Capitol Hill riot, but criticized the indefinite nature of his suspension and urged more careful preparation for volatile political situations overall.

Meta reinstated the former U.S. president earlier this year.  

Last week, the board said Meta’s handling of calls for violence after the 2022 Brazilian election continued to raise concerns about the effectiveness of its election efforts.

Hun Sen’s video, broadcast on his official Facebook page in January, showed the prime minister threatening to beat up political rivals and send “gangsters” to their homes, according to the board’s ruling.

Meta determined at the time that the video ran afoul of its rules, but opted to leave it up under a “newsworthiness” exemption, reasoning that the public had an interest in hearing warnings of violence by their government, the ruling said.

The board held that the video’s harms outweighed its news value.

‘Godfather of AI’ Urges Governments to Stop Machine Takeover

Geoffrey Hinton, one of the so-called godfathers of artificial intelligence, on Wednesday urged governments to step in and make sure that machines do not take control of society.

Hinton made headlines in May when he announced he had quit Google after a decade of work to speak more freely on the dangers of AI, shortly after the release of ChatGPT captured the imagination of the world.

The highly respected AI scientist, who is based at the University of Toronto, was speaking to a packed audience at the Collision tech conference in the Canadian city.

The conference brought together more than 30,000 startup founders, investors and tech workers, most looking to learn how to ride the AI wave and not hear a lesson on its dangers.

“Before AI is smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might try and take control away,” Hinton said.

“Right now there are 99 very smart people trying to make AI better and one very smart person trying to figure out how to stop it taking over and maybe you want to be more balanced,” he said.

AI could deepen inequality, says Hinton

Hinton warned that the dangers of AI should be taken seriously, despite critics who believe he is overplaying the risks.

“I think it’s important that people understand that this is not science fiction, this is not just fearmongering,” he insisted. “It is a real risk that we must think about, and we need to figure out in advance how to deal with it.”

Hinton also expressed concern that AI would deepen inequality, with the massive productivity gain from its deployment going to the benefit of the rich, and not workers.

“The wealth isn’t going to go to the people doing the work. It is going to go into making the rich richer and not the poorer and that’s very bad for society,” he added.

He also pointed to the danger of fake news created by ChatGPT-style bots and said he hoped that AI-generated content could be marked in a way similar to how central banks watermark banknotes.

“It’s very important to try, for example, to mark everything that is fake as fake. Whether we can do that technically, I don’t know,” he said.

The European Union is considering such a technique in its AI Act, legislation that will set the rules for AI in Europe and is currently being negotiated by lawmakers.

‘Overpopulation on Mars’

Hinton’s list of AI dangers contrasted with conference discussions that focused less on safety and threats and more on seizing the opportunity created in the wake of ChatGPT.

Venture capitalist Sarah Guo said doom-and-gloom talk of AI as an existential threat was premature and compared it to “talking about overpopulation on Mars,” quoting another AI guru, Andrew Ng.

She also warned against “regulatory capture,” in which government intervention would protect incumbents before the technology had a chance to benefit sectors such as health, education or science.

Opinions differed on whether the current generative AI giants, mainly Microsoft-backed OpenAI and Google, would remain unmatched or whether new actors would expand the field with their own models and innovations.

“In five years, I still imagine that if you want to go and find the best, most accurate, most advanced general model, you’re probably going to still have to go to one of the few companies that have the capital to do it,” said Leigh Marie Braswell of venture capital firm Kleiner Perkins.

Zachary Bratun-Glennon of Gradient Ventures said he foresaw a future where “there are going to be millions of models across a network much like we have a network of websites today.”

Cambodia’s Hun Sen Leaves Facebook for Telegram 

PHNOM PENH, CAMBODIA — Cambodian Prime Minister Hun Sen, a devoted and very active user of Facebook — on which he has posted everything from photos of his grandchildren to threats against his political enemies — said Wednesday that he would no longer upload to the platform and would instead depend on the Telegram app to get his messages across. 

Telegram is a popular messaging app that also has a blogging tool called “channels.” In Russia and some neighboring countries, it is actively used both by government officials and opposition activists for communicating with mass audiences. Telegram played an important role in coordinating unprecedented anti-government protests in Belarus in 2020, and it currently serves as a major source of news about Russia’s war in Ukraine. 

Hun Sen, 70, who has led Cambodia for 38 years, is listed as having 14 million Facebook followers, though critics have suggested a large number of them are merely “ghost” accounts purchased in bulk from so-called “click farms,” an assertion the long-serving prime minister has repeatedly denied. The Facebook accounts of Joe Biden and Donald Trump by comparison boast 11 million and 34 million followers, respectively, though the United States has about 20 times the population of Cambodia. 

Hun Sen officially launched his Facebook page on September 20, 2015, after his fierce political rival, opposition leader Sam Rainsy, effectively demonstrated how it could be used to mobilize support. Hun Sen is noted as a canny and sometimes ruthless politician, and he has since then managed to drive his rival into exile and neutralize all his challengers, even though Cambodia is a nominally democratic state. 

Controversial remarks

Hun Sen said he was giving up Facebook for Telegram because he believed the latter would be more effective for communicating. In a Telegram post on Wednesday, he said it would be easier for him to get his message out when traveling in other countries that officially ban Facebook use, such as China, the top ally of his government. Hun Sen has 855,000 followers so far on Telegram, where he appears to have started posting in mid-May. 

It is possible, however, that Hun Sen’s switch in social media loyalty has to do with the controversy over remarks he posted on Facebook earlier this year, which in theory could get him at least temporarily banned from the platform.

In January, speaking at a road construction ceremony, he decried opposition politicians who accused his ruling Cambodian People’s Party of stealing votes. 

“There are only two options. One is to use legal means and the other is to use a stick,” the prime minister said. “Either you face legal action in court, or I rally [the Cambodian] People’s Party people for a demonstration and beat you up.” His remarks were spoken on Facebook Live and kept online as a video. 

Perhaps because of heightened consciousness about the power of social media to inflame and trigger violence in such countries as India and Myanmar, and because the remarks were made ahead of a general election in Cambodia this July, complaints about his words were lodged with Facebook’s parent company, Meta. 

Facebook’s moderators declined to recommend action against Hun Sen, judging that his position as a national leader made his remarks newsworthy and therefore not subject to punishment despite their provocative nature. 

However, the case was forwarded in March to Meta’s Oversight Board, a group of independent experts that is empowered to render an overriding judgment that could limit Hun Sen’s Facebook activities. They are expected to issue a decision on Thursday. The case is being closely watched as an indicator of where Facebook will draw the line in countries with volatile political situations. 

Hun Sen said his Facebook account would remain online, but that he would no longer actively post to it. He urged people looking for news from him to check YouTube and his Instagram account as well as Telegram and said he had ordered his office to establish a TikTok account to allow him to communicate with his country’s youth.

White House Expanding Affordable High-Speed Internet Access

U.S. President Joe Biden says he wants to make sure that every American has access to high-speed internet. VOA’s Julie Taboh has our story about the United States’ more than $40 billion investment to expand the service. (Videographer: Adam Greenbaum; Produced by Julie Taboh, Adam Greenbaum)

Generative AI Might Make It Easier to Target Journalists, Researchers Say

Since the artificial intelligence chatbot ChatGPT launched last fall, a torrent of think pieces and news reports about the ins and outs and ups and downs of generative artificial intelligence has flowed, stoking fears of a dystopian future in which robots take over the world.  

While much of that hype is indeed just hype, a new report has identified immediate risks posed by apps like ChatGPT. Some of those present distinct challenges to journalists and the news industry.  

Published Wednesday by New York University’s Stern Center for Business and Human Rights, the report identified eight risks related to generative artificial intelligence, or AI, including disinformation, cyberattacks, privacy violations and the decay of the news industry.  

The AI debate “is getting a little confused between concerns about existential dangers versus what immediate harms generative AI might entail,” the report’s co-author Paul Barrett told VOA. “We shouldn’t get paralyzed by the question of, ‘Oh my God, will this technology lead to killer robots that are going to destroy humanity?'” 

The systems being released right now are not going to lead to that nightmarish outcome, explained Barrett, who is the deputy director of the Stern Center.  

Instead, the report — which Barrett co-authored with Justin Hendrix, founder and editor of the media nonprofit Tech Policy Press — argues that lawmakers, regulators and the AI industry itself should prioritize addressing the immediate potential risks.  

Safety concerns

Among the most concerning risks are the threats that artificial intelligence may pose to the personal safety of journalists and activists.

Doxxing and smear campaigns are already among the many threats that journalists face online over their work. Doxxing is the practice of publishing private or identifying information about someone, such as their home address or phone number, on the internet.

But now with generative AI, it will likely be even easier to dox reporters and harass them online, according to Barrett.  

“If you want to set up a campaign like that, you’re going to have to do a lot less work using generative AI systems,” Barrett said. “It’ll be easier to attack journalists.”  

Propaganda easy to make

Disinformation is another primary risk that the report highlights, because generative AI makes it easier to churn out propaganda.  

The report notes that if the Kremlin had access to generative AI in its disinformation campaign surrounding the 2016 U.S. presidential election, Moscow could have launched a more destructive and less expensive influence operation.  

Generative AI “is going to be a huge engine of efficiency, but it’s also going to make much more efficient the production of disinformation,” Barrett said.  

That has implications for press freedom and media literacy, since studies indicate that exposure to misinformation and disinformation is linked to reduced trust in the media.

Generative AI may also exacerbate financial issues plaguing newsrooms, according to the report. 

If people ask ChatGPT a question, for instance, and are happy with the summarized answer, they’re less likely to click on other links to news articles. That means shrinking traffic and therefore ad dollars for news sites, the report said.  

But artificial intelligence is far from all bad news for the media industry.  

For example, AI tools can help journalists research by scraping PDF files and analyzing data quickly. Artificial intelligence can also help fact-check sources and write headlines.  
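
As a concrete illustration of that research use case, the PDF-scraping step can be as simple as a few lines of Python using the open-source pypdf library; the file name below is a placeholder, and the extracted text could then be searched, fact-checked or summarized with whatever tool a newsroom prefers.

```python
# Minimal sketch of the PDF-scraping step: pull the raw text out of a document
# so it can be searched or analyzed. "report.pdf" is a placeholder file name.
from pypdf import PdfReader

reader = PdfReader("report.pdf")
pages_text = [page.extract_text() or "" for page in reader.pages]
full_text = "\n".join(pages_text)

print(f"Extracted {len(reader.pages)} pages, {len(full_text):,} characters")
# From here, the text can be scanned for names, dates and figures,
# or handed to a summarization model for a first-pass read.
```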

In the report, Barrett and Hendrix caution the government against letting this new industry repeat the mistakes that were made with social media platforms.  

“Generative AI doesn’t deserve the deference enjoyed for so long by social media companies,” they write.  

They recommend that the government enhance federal authority to oversee AI companies and require more transparency from them.  

“Congress, regulators, the public — and the industry, for that matter — need to pay attention to the immediate potential risks,” Barrett said. “And if the industry doesn’t move fast enough on that front, that’s something Congress needs to figure out a way to force them to pay attention to.”