
Solar Panels Over Canals in Gila River Indian Community Will Help Save Water

In a move that may soon be replicated elsewhere, the Gila River Indian Community recently signed an agreement with the U.S. Army Corps of Engineers to put solar panels over a stretch of irrigation canal on its land south of Phoenix.

It will be the first project of its kind in the United States to break ground, according to the tribe’s press release.

“This was a historic moment here for the community but also for the region and across Indian Country,” said Gila River Indian Community Governor Stephen Roe Lewis in a video published on X, formerly known as Twitter.

The first phase, set to be completed in 2025, will cover 1,000 feet of canal and generate one megawatt of electricity that the tribe will use to irrigate crops, including feed for livestock, cotton and grains.

The idea is simple: install solar panels over canals in sunny, water-scarce regions where they reduce evaporation and make renewable electricity.

“We’re proud to be leaders in water conservation, and this project is going to do just that,” Lewis said, noting the significance of a Native, sovereign, tribal nation leading on the technology.

A study by the University of California, Merced estimated that 63 billion gallons of water could be saved annually by covering California’s 4,000 miles of canals. More than 100 climate advocacy groups are pushing for just that.
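As a rough illustration of what that estimate implies, the statewide figure can be scaled down to the two pilot projects mentioned in this article. This is only a back-of-envelope sketch that assumes savings scale linearly with covered length; in practice evaporation varies with canal width, climate and panel design, which the UC Merced study accounts for and this simple calculation does not.

```python
# Back-of-envelope scaling of the UC Merced estimate: 63 billion gallons
# saved per year if all 4,000 miles of California's canals were covered.
# Assumes linear scaling with covered length (a simplification).

TOTAL_GALLONS_SAVED = 63e9   # statewide estimate, gallons per year
TOTAL_CANAL_MILES = 4_000

per_mile = TOTAL_GALLONS_SAVED / TOTAL_CANAL_MILES  # gallons per mile per year

# Hypothetical application to the two pilots described in the article:
turlock_miles = 1.6           # Turlock Irrigation District pilot
gila_miles = 1_000 / 5_280    # Gila River phase one: 1,000 feet of canal

print(f"Per mile of covered canal: {per_mile:,.0f} gallons/year")
print(f"Turlock pilot ({turlock_miles} mi): {per_mile * turlock_miles:,.0f} gallons/year")
print(f"Gila phase one ({gila_miles:.2f} mi): {per_mile * gila_miles:,.0f} gallons/year")
```

Under that linear assumption, each covered mile would save roughly 15.75 million gallons a year, so even short pilot stretches translate into millions of gallons annually.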

Researchers believe the solar canopies would also generate a significant amount of electricity.

UC Merced wants to hone its initial estimate and should soon have the chance. Not far away in California’s Central Valley, the Turlock Irrigation District and partner Solar AquaGrid plan to construct 1.6 miles (2.6 kilometers) of solar canopies over its canals beginning this spring, and researchers will study the benefits.

Neither the Gila River Indian Community nor the Turlock Irrigation District is the first to implement this technology globally. Indian engineering firm Sun Edison inaugurated the first solar-covered canal in 2012 on one of the largest irrigation projects in the world in Gujarat state. Despite ambitious plans to cover 11,800 miles (19,000 kilometers) of canals, only a handful of small projects ever went up, and the engineering firm filed for bankruptcy.

High capital costs, clunky design and maintenance challenges were obstacles to widespread adoption, experts say.

But severe, prolonged drought in the western U.S. has made water a central political issue, heightening interest in technologies like cloud seeding and solar-covered canals as water managers grasp at any solution that might buoy reserves, even ones that haven’t been widely tested, or tested at all.

Still, the project is an important indicator of the tribe’s commitment to water conservation, said Heather Tanana, a visiting law professor at the University of California, Irvine and citizen of the Navajo Nation. Tribes hold the most senior water rights on the Colorado River, though many are still settling those rights in court.

“There’s so much fear about the tribes asserting their rights and if they do so, it’ll pull from someone else’s rights,” she said. The tribe leaving water in Lake Mead and putting federal dollars toward projects like solar canopies is “a great example to show that fear is unwarranted.”

The federal government has made record funding available for water-saving projects, including a $233 million pact with the Gila River Indian Community to conserve about two feet of water in Lake Mead, the massive and severely depleted reservoir on the Colorado River. Phase one of the solar canal project will cost $6.7 million and the Bureau of Reclamation provided $517,000 for the design.

Microsoft Hires Sam Altman as OpenAI’s New CEO Vows to Investigate Firing

Microsoft snapped up Sam Altman and another architect of OpenAI for a new venture after their sudden departures shocked the artificial intelligence world, leaving the newly installed CEO of the ChatGPT maker to paper over tensions by vowing to investigate Altman’s firing.

The developments Monday come after a weekend of drama and speculation about how the power dynamics would shake out at OpenAI, whose chatbot kicked off the generative AI era by producing human-like text, images, video and music.

It ended with former Twitch leader Emmett Shear taking over as OpenAI’s interim chief executive and Microsoft announcing it was hiring Altman and OpenAI co-founder and former President Greg Brockman to lead Microsoft’s new advanced AI research team.

Despite the rift between the key players behind ChatGPT and the company they helped build, both Shear and Microsoft Chairman and CEO Satya Nadella said they are committed to their partnership.

Microsoft invested billions of dollars in the startup and helped provide the computing power to run its AI systems. Nadella wrote on X, formerly known as Twitter, that he was “extremely excited” to bring on the former executives of OpenAI and looked “forward to getting to know” Shear and the rest of the management team.

In a reply on X, Altman said “the mission continues,” while Brockman posted, “We are going to build something new & it will be incredible.”

OpenAI said Friday that Altman was pushed out after a review found he was “not consistently candid in his communications” with the board of directors, which had lost confidence in his ability to lead the company.

In an X post Monday, Shear said he would hire an independent investigator to look into what led up to Altman’s ouster and write a report within 30 days.

“It’s clear that the process and communications around Sam’s removal has been handled very badly, which has seriously damaged our trust,” wrote Shear, who co-founded Twitch, an Amazon-owned livestreaming service popular with video gamers.

He said he also plans in the next month to “reform the management and leadership team in light of recent departures into an effective force” and speak with employees, investors and customers.

After that, Shear said he would “drive changes in the organization,” including “significant governance changes if necessary.” He noted that the reason behind the board removing Altman was not a “specific disagreement on safety,” a likely reference to the debates that have swirled around OpenAI’s mission to safely build AI that is “generally smarter than humans.”

OpenAI last week declined to answer questions on what Altman’s alleged lack of candor was about. Its statement said his behavior was hindering the board’s ability to exercise its responsibilities. But a key driver of Friday’s shakeup, OpenAI’s co-founder, chief scientist and board member Ilya Sutskever, posted regrets on the situation to X on Monday: “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”

OpenAI didn’t reply to emails Monday seeking comment. A Microsoft representative said the company would not be commenting beyond its CEO’s statement.

After Altman was pushed out Friday, he stirred speculation that he might be coming back into the fold in a series of tweets. He posted a photo of himself with an OpenAI guest pass on Sunday, saying this is “first and last time i ever wear one of these.”

Hours earlier, he tweeted, “i love the openai team so much,” which drew heart replies from Brockman, who quit after Altman was fired, and Mira Murati, OpenAI’s chief technology officer who was initially named as interim CEO.

It’s not clear what transpired between the announcement of Murati’s interim role Friday and Shear’s hiring, though she was among several employees on Monday who tweeted, “OpenAI is nothing without its people.” Altman replied to many with heart emojis.

Shear said he stepped down as Twitch CEO because of the birth of his now-9-month-old son but “took this job because I believe that OpenAI is one of the most important companies currently in existence.”

His beliefs on the future of AI came up on a podcast in June. Shear said he’s generally an optimist about technology but has serious concerns about the path of artificial intelligence toward building something “a lot smarter than us” that sets itself on a goal that endangers humans.

“If there is a world where we survive … where we build an AI that’s smarter than humans and survive it, it’s going to be because we built smaller AIs than that, and we actually had as many smart people as we can working on that, and taking the problem seriously,” Shear said in June.

It’s an issue that Altman consistently faced since he helped catapult ChatGPT to global fame. In the past year, he has become Silicon Valley’s most sought-after voice on the promise and potential dangers of artificial intelligence.

He went on a world tour to meet with government officials earlier this year, drawing big crowds at public events as he discussed both the risks of AI and attempts to regulate the emerging technology.

Altman posted Friday on X that “i loved my time at openai” and later called his ouster a “weird experience.”

“If Microsoft lost Altman he could have gone to Amazon, Google, Apple, or a host of other tech companies craving to get the face of AI globally in their doors,” Daniel Ives, an analyst with Wedbush Securities, said in a research note.

Microsoft is now in an even stronger position on AI, Ives said. Its shares rose nearly 2% before the opening bell and were nearing an all-time high Monday.

Space Tracking Helps Australia Monitor, Manage Feral Buffalo Herds

Indigenous rangers in northern Australia have started managing herds of feral animals from space. In the largest project of its kind in Australia, the so-called Space Cows project involves tagging and then tracking a thousand wild cattle and buffalo via satellite.

Water buffalo were imported into Australia’s Northern Territory in the 19th century as working animals and meat for remote settlements. When those communities were abandoned, the animals were released into the wild.

Their numbers have grown, and feral buffaloes can cause huge environmental damage. In wetlands, they move along pathways called swim channels, which have caused salt water to flow into freshwater plains. This has led to the degradation and loss of large areas of paperbark forest and natural waterholes, as well as the spread of weeds.

Under the Space Cows program, feral cattle and buffaloes are being rounded up, often by helicopter, tied to trees, and fitted with solar-powered tags that can be tracked by satellite.

Scientists say the real-time data will be critical to controlling and predicting the movement of the feral herds, which are notorious for trashing the landscape.

Most feral buffalo are found on Aboriginal land, and researchers are working closely with Indigenous rangers. They carry out sporadic buffalo culls, and there are hopes that First Nations communities can benefit economically from well-managed feral herds.

The technology will allow Indigenous rangers to predict where cattle and buffalo are heading and to cull them or fence off important cultural and environmental sites. The data will help rangers stop the animals from trampling sacred ceremonial areas and destroying culturally significant waterways, and scientists say the satellite information will show when herds are likely to head to certain waterways in warm weather, allowing rangers to intervene.

In recent years, thousands of wild buffalo have been exported from Australia to Southeast Asia.

Andrew Hoskins is a biologist at the CSIRO, the Commonwealth Scientific and Industrial Research Organization, Australia’s national science agency.

He told the Australian Broadcasting Corp’s AM Program this is the first time feral animals have been monitored from space.

“This really, you know, large scale tracking project, (is) probably the largest from a wildlife or a buffalo tracking perspective that has ever been done.  The novel part, I suppose, is then that links through to a space-based satellite system,” said Hoskins.

Australia has had an often-disastrous experience with animals brought in from overseas since European colonization began in the late 1700s. It is not just buffaloes that cause immense environmental damage.

Cane toads — brought to the country in a failed attempt to control pests on sugar cane plantations in the 1930s — are prolific breeders and voracious feeders that prey on native insects, frogs, reptiles and other small creatures. Their skin secretes a toxin that can also kill native predators.

Feral cats kill millions of birds in Australia each year, while foxes, pigs and camels cause widespread ecological damage across Australia.  

Yellow crazy ants are one of the world’s worst invasive species. Authorities believe they arrived in Australia accidentally through shipping ports. They have been recorded in Queensland and New South Wales states as well as the Northern Territory. The ants are a highly aggressive species and spray formic acid, which burns the skin of their prey, including small mammals, turtle hatchlings and bird chicks.

Artists Push for US Copyright Reforms on AI, But Tech Industry Says Not So Fast

Country singers, romance novelists, video game artists and voice actors are appealing to the U.S. government for relief — as soon as possible — from the threat that artificial intelligence poses to their livelihoods.

“Please regulate AI. I’m scared,” wrote a podcaster concerned about his voice being replicated by AI in one of thousands of letters recently submitted to the U.S. Copyright Office.

Technology companies, by contrast, are largely happy with the status quo that has enabled them to gobble up published works to make their AI systems better at mimicking what humans do.

The nation’s top copyright official hasn’t yet taken sides. She told The Associated Press she’s listening to everyone as her office weighs whether copyright reforms are needed for a new era of generative AI tools that can spit out compelling imagery, music, video and passages of text.

“We’ve received close to 10,000 comments,” said Shira Perlmutter, the U.S. register of copyrights, in an interview. “Every one of them is being read by a human being, not a computer. And I myself am reading a large part of them.”

What’s at stake?

Perlmutter directs the U.S. Copyright Office, which registered more than 480,000 copyrights last year covering millions of individual works but is increasingly being asked to register works that are AI-generated. So far, copyright claims for fully machine-generated content have been soundly rejected because copyright laws are designed to protect works of human authorship.

But, Perlmutter asks, as humans feed content into AI systems and give instructions to influence what comes out, “is there a point at which there’s enough human involvement in controlling the expressive elements of the output that the human can be considered to have contributed authorship?”

That’s one question the Copyright Office has put to the public.

A bigger one — the question that’s fielded thousands of comments from creative professions — is what to do about copyrighted human works that are being pulled from the internet and other sources and ingested to train AI systems, often without permission or compensation.

More than 9,700 comments were sent to the Copyright Office, part of the Library of Congress, before an initial comment period closed in late October. Another round of comments is due by December 6. After that, Perlmutter’s office will work to advise Congress and others on whether reforms are needed.

What are artists saying?

Addressing the “Ladies and Gentlemen of the US Copyright Office,” the Family Ties actor and filmmaker Justine Bateman said she was disturbed that AI models were “ingesting 100 years of film” and TV in a way that could destroy the structure of the film business and replace large portions of its labor pipeline.

It “appears to many of us to be the largest copyright violation in the history of the United States,” Bateman wrote. “I sincerely hope you can stop this practice of thievery.”

Airing some of the same AI concerns that fueled this year’s Hollywood strikes, television showrunner Lilla Zuckerman (Poker Face) said her industry should declare war on what is “nothing more than a plagiarism machine” before Hollywood is “co-opted by greedy and craven companies who want to take human talent out of entertainment.”

The music industry is also threatened, said Nashville-based country songwriter Marc Beeson, who’s written tunes for Carrie Underwood and Garth Brooks. Beeson said AI has potential to do good but “in some ways, it’s like a gun — in the wrong hands, with no parameters in place for its use, it could do irreparable damage to one of the last true American art forms.”

While most commenters were individuals, their concerns were echoed by big music publishers — Universal Music Group called the way AI is trained “ravenous and poorly controlled” — as well as author groups and news organizations including The New York Times and The Associated Press.

Is it fair use?

What leading tech companies like Google, Microsoft and ChatGPT-maker OpenAI are telling the Copyright Office is that their training of AI models fits into the “fair use” doctrine that allows for limited uses of copyrighted materials such as for teaching, research or transforming the copyrighted work into something different.

“The American AI industry is built in part on the understanding that the Copyright Act does not proscribe the use of copyrighted material to train Generative AI models,” says a letter from Meta Platforms, the parent company of Facebook, Instagram and WhatsApp. The purpose of AI training is to identify patterns “across a broad body of content,” not to “extract or reproduce” individual works, it added.

So far, courts have largely sided with tech companies in interpreting how copyright laws should treat AI systems. In a defeat for visual artists, a federal judge in San Francisco last month dismissed much of the first big lawsuit against AI image-generators, though allowed some of the case to proceed.

Most tech companies cite as precedent Google’s success in beating back legal challenges to its online book library. The U.S. Supreme Court in 2016 let stand lower court rulings that rejected authors’ claim that Google’s digitizing of millions of books and showing snippets of them to the public amounted to copyright infringement.

But that’s a flawed comparison, argued former law professor and bestselling romance author Heidi Bond, who writes under the pen name Courtney Milan. Bond said she agrees that “fair use encompasses the right to learn from books,” but Google Books obtained legitimate copies held by libraries and institutions, whereas many AI developers are scraping works of writing through “outright piracy.”

Perlmutter said this is what the Copyright Office is trying to help sort out.

“Certainly, this differs in some respects from the Google situation,” Perlmutter said. “Whether it differs enough to rule out the fair use defense is the question in hand.”

Advertisers Flee Elon Musk’s X Amid Concerns of Antisemitism Backlash

Advertisers are fleeing social media platform X over concerns about their ads showing up next to pro-Nazi content and hate speech on the site in general, with billionaire owner Elon Musk inflaming tensions with his own posts endorsing an antisemitic conspiracy theory.

IBM said this week that it stopped advertising on X after a report said its ads were appearing alongside material praising Nazis — a fresh setback as the platform, formerly known as Twitter, tries to win back big brands and their ad dollars, X’s main source of revenue.

The liberal advocacy group Media Matters said in a report Thursday that ads from Apple, Oracle, NBCUniversal’s Bravo network and Comcast also were placed next to antisemitic material on X.

“IBM has zero tolerance for hate speech and discrimination and we have immediately suspended all advertising on X while we investigate this entirely unacceptable situation,” the company said in a statement.

Apple, Oracle, NBCUniversal and Comcast didn’t respond immediately to requests seeking comment on their next steps.

The European Union’s executive branch said separately Friday it is pausing advertising on X and other social media platforms, in part because of a surge in hate speech. Later in the day, Disney, Lionsgate and Paramount Global also said they were suspending or pausing advertising on X.

Musk sparked outcry this week with his own tweets responding to a user who accused Jews of hating white people and professing indifference to antisemitism. “You have said the actual truth,” Musk tweeted in a reply Wednesday.

Musk has faced accusations of tolerating antisemitic messages on the platform since purchasing it last year, and the content on X has gained increased scrutiny since the war between Israel and Hamas began.

“We condemn this abhorrent promotion of antisemitic and racist hate in the strongest terms, which runs against our core values as Americans,” White House spokesperson Andrew Bates said Friday in response to Musk’s tweet.

X CEO Linda Yaccarino said X’s “point of view has always been very clear that discrimination by everyone should STOP across the board.”

“I think that’s something we can and should all agree on,” she tweeted Thursday.

Yaccarino, a former NBCUniversal executive, was hired by Musk to rebuild ties with advertisers who fled after he took over, concerned that his easing of content restrictions was allowing hateful and toxic speech to flourish and would harm their brands.

“When it comes to this platform — X has also been extremely clear about our efforts to combat antisemitism and discrimination. There’s no place for it anywhere in the world — it’s ugly and wrong. Full stop,” Yaccarino said.

Media Matters and Anti-Defamation League

The accounts that Media Matters found posting antisemitic material will no longer be monetizable and the specific posts will be labeled “sensitive media,” according to a statement from X. Still, Musk decried Media Matters as “an evil organization.”

The head of the Anti-Defamation League also hit back at Musk’s tweets this week, in the latest clash between the prominent Jewish civil-rights organization and the billionaire businessman.

“At a time when antisemitism is exploding in America and surging around the world, it is indisputably dangerous to use one’s influence to validate and promote antisemitic theories,” ADL CEO Jonathan Greenblatt said on X.

Musk also tweeted this week that he was “deeply offended by ADL’s messaging and any other groups who push de facto anti-white racism or anti-Asian racism or racism of any kind.”

The group has previously accused Musk of allowing antisemitism and hate speech to spread on the platform and amplifying the messages of neo-Nazis and white supremacists who want to ban the ADL.

European Commission steps back

The European Commission, meanwhile, said it’s putting all its social media ad efforts on hold because of an “alarming increase in disinformation and hate speech” on platforms in recent weeks.

The commission, the 27-nation EU’s executive arm, said it is advising its services to “refrain from advertising at this stage on social media platforms where such content is present,” adding that the freeze doesn’t affect its official accounts on X.

The EU has taken a tough stance with new rules to clean up social media platforms, and last month it made a formal request to X for information about its handling of hate speech, misinformation and violent terrorist content related to the Israel-Hamas war.

TikTok troubles

X isn’t alone in dealing with problematic content since the conflict.

On Thursday, TikTok removed the hashtag #lettertoamerica after users on the app posted sympathetic videos about Osama bin Laden’s 2002 letter justifying the terrorist attacks against Americans on 9/11 and criticizing U.S. support for Israel. The Guardian news outlet, which published the transcript of the letter that was being shared, took it down and replaced it with a statement that directed readers to a news article from 2002 that it said provided more context.

The videos garnered widespread attention among X users critical of TikTok, which is owned by Beijing-based ByteDance. TikTok said the letter was not a trend on its platform and blamed an X post by journalist Yashar Ali and media coverage for drawing more engagement to the hashtag.

The short-form video app has faced criticism from Republicans and others who say the platform has been failing to protect Jewish users from harassment and pushing pro-Palestinian content to viewers.

TikTok has aggressively pushed back, saying it’s been taking down antisemitic content and doesn’t manipulate its algorithm to take sides. 

Second SpaceX Starship Launch Presumed Failed Minutes After Reaching Space

SpaceX’s uncrewed spacecraft Starship, developed to carry astronauts to the moon and beyond, was presumed to have failed in space minutes after lifting off on Saturday in a second test after its first attempt to reach space ended in an explosion.

The two-stage rocket ship blasted off from the Elon Musk-owned company’s Starbase launch site near Boca Chica, Texas, soaring roughly 90 kilometers (55 miles) above ground on a planned 90-minute flight into space.

But the rocket’s Super Heavy first stage booster, though it appeared to achieve a crucial maneuver to separate from its core stage, exploded over the Gulf of Mexico shortly after detaching.

Meanwhile, the core Starship stage continued toward space, but roughly 10 minutes into the flight a company broadcaster said that SpaceX mission control suddenly lost contact with the vehicle.

“We have lost the data from the second stage. … We think we may have lost the second stage,” SpaceX’s livestream host John Insprucker said.

The launch was the second attempt to fly Starship mounted atop its towering Super Heavy rocket booster, following an April attempt that ended in failure about four minutes after liftoff.

A live SpaceX webcast of Saturday’s launch showed the rocket ship rising from the launch tower into the morning sky as the Super Heavy’s cluster of powerful Raptor engines thundered to life.

The test flight’s principal objective was to get Starship off the ground and into space just shy of Earth’s orbit. Doing so would have marked a key step toward achieving SpaceX’s goal of producing a large, multipurpose spacecraft capable of sending people and cargo back to the moon later this decade for NASA, and ultimately to Mars.

Musk — SpaceX’s founder, chief executive and chief engineer — also sees Starship as eventually replacing the company’s workhorse Falcon 9 rocket as the centerpiece of its launch business, which already carries most of the world’s satellites and other commercial payloads into space.

NASA, SpaceX’s primary customer, has a considerable stake in the success of Starship, which the U.S. space agency is counting on to play a central role in its human spaceflight program, Artemis, successor to the Apollo missions of more than a half century ago that put astronauts on the moon for the first time.

The mission’s objective was to get Starship off the ground in Texas and into space just shy of reaching orbit, then plunge through Earth’s atmosphere for a splashdown off Hawaii’s coast. The launch had been scheduled for Friday but was pushed back by a day for a last-minute swap of flight-control hardware.

During its April 20 test flight, the spacecraft blew itself to bits less than four minutes into a planned 90-minute flight that went awry from the start. SpaceX has acknowledged that some of the Super Heavy’s 33 Raptor engines malfunctioned on ascent, and that the lower-stage booster rocket failed to separate as designed from the upper-stage Starship before the flight was terminated. 

Hollywood Actors Offered Protections Against AI in Labor Deal

Leaders of the union representing Hollywood actors announced a tentative deal recently with film and television studios to end a strike that started in July. It includes pay raises, streaming bonuses for actors, and the industry’s first protections against the use of artificial intelligence. From Los Angeles, Genia Dulot has our story.

US Approves SpaceX for 2nd Launch of Starship Super Heavy

The U.S. Federal Aviation Administration on Wednesday granted Elon Musk’s SpaceX a license to launch the company’s second test flight of its next-generation Starship and heavy-lift rocket from Texas, the agency said. 

SpaceX said it was targeting Friday for a launch, saying a two-hour launch window opens at 7 a.m. Central Time (1300 GMT) and that local residents “may hear a loud noise” during the rocket’s ascent toward space. 

“The FAA determined SpaceX met all safety, environmental, policy and financial responsibility requirements,” the agency, which oversees commercial launch sites, said in a statement. 

SpaceX’s first attempt to send Starship to space was in April, when the rocket exploded mid-air four minutes after a liftoff that pulverized the company’s launchpad and flung sand and concrete chunks for miles. 

Though Musk, SpaceX’s CEO and founder, hailed the Starship launch attempt as exceeding his expectations, it fell far short of its overall test objectives to reach space, complete nearly a full revolution around Earth, and reenter the atmosphere for a splashdown off a Hawaiian coast. 

First the moon, eventually Mars

Starship, standing taller than the Statue of Liberty at 120 meters and designed to be fully reusable, represents SpaceX’s next-generation workhorse rocket system capable of ferrying some 150 tons of satellites into space. Plans also call for the rocket system to be used to carry crews of humans to the moon, and eventually Mars. 

The rocket is crucial for SpaceX’s increasingly dominant launch business. NASA, under a roughly $4 billion development contract with SpaceX, plans to use Starship around 2026 to land the first crew of humans on the moon’s surface since 1972. 

Hundreds of fixes before launch

The upcoming Starship flight will have the same test objectives as the first attempt. SpaceX made hundreds of fixes to the rocket’s design based on the April failure. The FAA required SpaceX to make dozens of fixes before allowing another Starship flight. 

SpaceX determined that an onboard fire prevented Starship — the rocket system’s upper stage — from separating from its Super Heavy first stage booster as planned. The rocket’s explosion was the result of an automated destruction command, which was triggered some 40 seconds late.

Nickel Miners, Environmentalists Learn to Live Together in Michigan

It began as a familiar old story.

In the early 2000s, multinational mining giant Rio Tinto came to the wilds of Michigan’s Upper Peninsula to dig a nickel mine.

Environmentalists feared pollution. The company promised jobs.

The usual battle lines were drawn. The usual legal fights ensued.

But this time, something different happened.

The mining company invited a respected local environmental group to be an independent watchdog, conducting pollution testing that goes above and beyond what regulators require.

More than a decade has passed, and no major pollution problems have arisen. Community opposition has softened.

“I was fiercely opposed to the mine, and I changed,” said Maura Davenport, board chair of the Superior Watershed Partnership, the environmental group doing the testing.

The agreement between the mining company and the environmentalists is working at a time when demand for nickel and other metals used in green technologies is on the rise, but the mining activity that supplies those metals faces fierce local resistance around the world.

Historic mines, polluting history

The shift to cleaner energy needs copper to wire electrical grids, rare earth elements for wind turbine magnets, lithium for electric vehicle batteries, nickel to make those batteries run longer, and more. Meeting the goals of the 2015 U.N. Paris climate agreement would mean a fourfold increase in demand for metals overall by 2040 and a 19-fold increase in nickel, according to the International Energy Agency.

That means more mines. But mines rarely open anywhere in the world without controversy. Two nearby copper-nickel mine proposals hit major roadblocks this year over environmental concerns.

For the third year running, mining companies listed environmental, social and governance issues as the leading risk facing their businesses in a survey by consulting firm EY.

Mining is not new to the Upper Peninsula, the northern tip of the state of Michigan that is mostly surrounded by the Great Lakes. The region was the nation’s leading copper and iron producer until the late 1800s. An open-pit iron mine still operates about 20 kilometers (12 miles) southwest of the college town of Marquette.

Most of the historic copper mines closed in the 1930s. But the waste they left behind is still polluting today.

Residue left over from pulverizing copper ore, known as stamp sands, continues to drift into Lake Superior, leaching toxic levels of copper into the water.

“The whole history of mining is so bad, and we feared … for our precious land,” Davenport said.

The ore Rio Tinto sought is in a form known as nickel sulfide. When those rocks are exposed to air and water, they produce sulfuric acid. Acid mine drainage pollutes thousands of kilometers of water bodies across the United States. At its worst, it can render a stream nearly lifeless.

When Rio Tinto proposed building the Eagle Mine about 40 kilometers (25 miles) northwest of Marquette, “it divided our community,” Davenport said.

“The Marquette community was against the mine,” she said, but the “iron ore miners, they were all about it.”

Mining dilemma

It’s the same story the world over, according to Simon Nish, who worked for Rio Tinto at the time.

“Communities are faced with this dilemma,” Nish said. “We want jobs, we want economic benefit. We don’t want long-term environmental consequences. We don’t really trust the regulator. We don’t trust the company. We don’t trust the activists. … In the absence of trusted information, we’re probably going to say no.”

Nish came from Australia, where a legal reckoning had taken place in the 1990s over the land rights of the country’s indigenous peoples. Early in his career, he worked as a mediator for the National Native Title Tribunal, which brokered agreements between Aboriginal peoples and resource companies who wanted to use their land.

It was a formative experience.

“On the resource company side, you can crash through and get a short-term deal, but that’s actually not benefiting anybody,” he said. “If you want to get a long-term outcome, you’ve actually really got to understand the interests of both sides.”

“Absolutely skeptical”

When Nish arrived in Michigan in 2011, Rio Tinto’s Eagle Mine was under construction but faced multiple lawsuits from community opponents.

In order to quell the controversy, Nish knew that Rio Tinto needed a partner that the community could trust. So he approached the Superior Watershed Partnership with an unusual offer. The group was already running programs testing local waterways for pollution. Would they be willing to discuss running a program to monitor the mine?

“We were surprised. We were skeptical. Absolutely skeptical,” Davenport said. But they agreed to discuss it.

SWP insisted on full, unfettered access to monitor “anything, any time, anywhere,” Nish said.

SWP’s position toward Rio Tinto was “very, very clear,” he recalled: “‘We’ve spent a long time building our reputation, our credibility here. We aren’t going to burn it for you guys.'”

Over the course of several months — “remarkably fast,” as these things go, Nish said — the environmental group and the mining company managed to work out an agreement.

SWP would monitor the rivers, streams and groundwater for pollution from the mine and the ore-processing mill 30 kilometers (19 miles) south. It would test food and medicinal plants important for the local Native American tribe. And it would post the results of these and other tests online for the public to see.

And Rio Tinto would pay for the work. A respected local community foundation would handle the funds. Rio Tinto’s funding would be at arm’s length from SWP.

“We didn’t want to be on their payroll,” said Richard Anderson, who chaired the SWP board at the time. “That could not be part of the structure.”

Not over yet

The agreement launching the Community Environmental Monitoring Program was signed in 2012. More than a decade later, no major pollution problems have turned up.

But other local environmentalists are cautious.

“I do think [Eagle Mine is] really trying to do a good job environmentally,” said Rochelle Dale, head of the Yellow Dog Watershed Preserve, another local environmental group that has opposed the mine.

“On the other hand, a lot of the sulfide mines in the past haven’t really had a problem until after closure.

“It’s something that our grandchildren are going to inherit,” she said.

As demand for metals heats up, opposition to new mines is not cooling off. Experts say mining companies are wising up to the need for community buy-in. Eagle Mine’s Community Environmental Monitoring Program points to one option, but also its limitations.

So far, so good. But the story’s not over yet.

Nepal Bans TikTok, Says It Disrupts Social Harmony

Nepal’s government decided to ban the popular social media app TikTok, saying Monday it was disrupting “social harmony” in the country.

The announcement was made following a Cabinet meeting. Foreign Minister Narayan Prakash Saud said the app would be banned immediately.

“The government has decided to ban TikTok as it was necessary to regulate the use of the social media platform that was disrupting social harmony, goodwill and flow of indecent materials,” Saud said.

He said that to make social media platforms accountable, the government has asked the companies to register and open a liaison office in Nepal, pay taxes and abide by the country’s laws and regulations.

It wasn’t clear what triggered the ban or if TikTok had refused to comply with Nepal’s requests. The company did not immediately respond to an email seeking comment.

TikTok, owned by China’s ByteDance, has faced scrutiny in several countries over concerns that Beijing could use the app to harvest user data or advance its interests. Countries including the United States, Britain and New Zealand have banned the app on government phones, even as TikTok has repeatedly denied ever sharing data with the Chinese government and says it would not do so if asked.

Nepal banned all pornographic sites in 2018.

Cargo Standstill as Cyberattacks Close Australian Ports 

Several major Australian ports are resuming operations after shutting down due to a cyberattack. The ports are run by DP World Australia, one of the country’s biggest logistics companies. Authorities have not said who might be to blame.

The shutdown of several terminals followed a cyberattack on Australia’s second-largest port operator. DP World Australia said it became aware of malicious activity inside its computer network last Friday and shut down its systems in response.

The logistics company handles about 40% of all freight into and out of Australia. Terminals in Brisbane, Melbourne, Sydney and Fremantle in Western Australia have been affected, leaving cargo and containers stranded on the docks.

The specific nature of the intrusion has not been made public, but experts suggest the hackers likely demanded a ransom. Authorities say that identifying who is responsible will take time.

Australia’s National Cyber Security Coordinator says the flow of goods into and out of the country is likely to be disrupted for days. Authorities said a national crisis management response, first used during the COVID-19 pandemic, has been activated in response to the breach.

The home affairs and cyber security minister, Clare O’Neil, told local media Monday that efforts are being made to ensure the company’s computer network can safely be reactivated.

“DP World have been working with government to try to resolve this and in ways that will make sure that this does not impact as much as possible on Australians. It does show how vulnerable we have been in this country to cyber incidents,” said O’Neil.

Last year, major health care and telecommunications companies were the victims of two of the most significant data breaches in Australian history.

Research published in November 2022 found that a third of Australian adults had been victims of data breaches in the previous year. A study by the Australian National University showed that cyberattacks were one of the fastest growing types of crime in the country.

The Australian Taxation Office has previously reported that it receives 3 million attempted hacks on its system every month.

The Australian Banking Association said cybercrime was “potentially a significant threat to … national security.”

In April 2023, the government in Canberra enlisted major banks and financial services to take part in ‘wargaming’ exercises to test how they would respond to cyberattacks.

Australia Says Ports Operator Cyber Incident ‘Serious’

The Australian government on Sunday described as “serious and ongoing” a cybersecurity incident that forced ports operator DP World Australia to suspend operations at ports in several states since Friday.

DP World Australia, which manages nearly half of the goods that flow in and out of the country, said it was looking into possible data breaches as well as testing systems “crucial for the resumption of normal operations and regular freight movement.”

The breach has halted operations at container terminals in Melbourne, Sydney, Brisbane and Western Australia’s Fremantle since Friday.

“The cyber incident at DP World is serious and ongoing,” Home Affairs Minister Clare O’Neil said on social media platform X, formerly known as Twitter.

A DP World spokesperson did not immediately respond to a Reuters request for comment on when normal operations would resume. The company, part of Dubai’s state-owned DP World, is one of a handful of stevedore industry players in the country.

The Australian Federal Police said they were investigating the incident but declined to elaborate.

Late Saturday, National Cyber Security Coordinator Darren Goldie, appointed this year in response to several major data breaches, said the “interruption” was “likely to continue for a number of days and will impact the movement of goods into and out of the country.”

In the Asia-Pacific region, DP World says it employs more than 7,000 people and has ports and terminals in 18 locations.

Internet Collapses in Yemen Over ‘Maintenance’ After Houthi Attacks Targeting Israel, US

Internet access across the war-torn nation of Yemen collapsed Friday and stayed down for hours, with officials later blaming unannounced “maintenance work” for an outage that followed attacks by the country’s Houthi rebels on both Israel and the U.S.

The outage began early Friday and halted all traffic at YemenNet, the country’s main provider for about 10 million users, which is now controlled by Yemen’s Iranian-backed Houthis.

Both NetBlocks, a group tracking internet outages, and the internet services company Cloudflare reported the outage. Neither offered a cause.

“Data shows that the issue has impacted connectivity at a national level as well,” Cloudflare said.

Several hours later, some service was restored, though access remained troubled.

In a statement to the Houthi-controlled SABA state news agency, Yemen’s Public Telecom Corp. blamed the outage on maintenance.

“Internet service will return after the completion of the maintenance work,” the statement quoted an unidentified official as saying.

An earlier outage occurred in January 2022 when the Saudi-led coalition battling the Houthis in Yemen bombed a telecommunications building in the Red Sea port city of Hodeida. There was no immediate word of a similar attack.

The undersea FALCON cable carries the internet into Yemen through the Hodeida port along the Red Sea for TeleYemen. The FALCON cable has another landing in Yemen’s far eastern port of Ghaydah as well, but the majority of Yemen’s population lives in its west along the Red Sea.

GCX, the company that operates the cable, did not respond to a request for comment Friday.

The outage came after a series of recent drone and missile attacks by the Houthis targeting Israel during its campaign of airstrikes and a ground offensive targeting Hamas in the Gaza Strip. That includes a claimed strike Thursday targeting the Israeli port city of Eilat on the Red Sea. The Houthis also shot down an American MQ-9 Reaper drone this week with a surface-to-air missile, part of a wide series of attacks in the Mideast raising concerns about a regional war breaking out.

Yemen’s conflict began in 2014 when the Houthis seized Sanaa and much of the country’s north. The internationally recognized government fled to the south and then into exile in Saudi Arabia.

The Houthi takeover prompted a Saudi-led coalition to intervene months later and the conflict turned into a regional proxy war between Saudi Arabia and Iran, with the U.S. long involved on the periphery, providing intelligence assistance to the kingdom.

However, international criticism over Saudi airstrikes killing civilians saw the U.S. pull back its support. The U.S. is suspected of still carrying out drone strikes targeting suspected members of Yemen’s local al-Qaida branch.

The war has killed more than 150,000 people, including fighters and civilians, and created one of the world’s worst humanitarian disasters, killing tens of thousands more. A cease-fire that expired last October largely has held in the time since, though the Houthis are believed to be slowly stepping up their attacks as a permanent peace has yet to be reached.

Worker at South Korea Vegetable Packing Plant Crushed to Death by Industrial Robot

An industrial robot grabbed and crushed a worker to death at a vegetable packaging plant in South Korea, police said Thursday, as they investigated whether the machine was defective or improperly designed.

Police said early evidence suggests that human error was more likely to blame rather than problems with the machine itself. But the incident still triggered public concern about the safety of industrial robots and the false sense of security they may give to humans working nearby in a country that increasingly relies on such machines to automate its industries.

Police in the southern county of Goseong said the man died of head and chest injuries Tuesday evening after he was snatched and pressed against a conveyor belt by the machine’s robotic arms.

Police did not identify the man but said he was an employee of a company that installs industrial robots and was sent to the plant to examine whether the machine was working properly.

South Korea has had other accidents involving industrial robots in recent years. In March, a manufacturing robot crushed and seriously injured a worker who was examining it at an auto parts factory in Gunsan. Last year, a robot installed near a conveyor belt fatally crushed a worker at a milk factory in Pyeongtaek.

The machine that caused the death on Tuesday was one of two pick-and-place robots used at the facility, which packages bell peppers and other vegetables exported to other Asian countries, police said. Such machines are common in South Korea’s agricultural communities, which are struggling with a declining and aging workforce.

“It wasn’t an advanced, artificial intelligence-powered robot, but a machine that simply picks up boxes and puts them on pallets,” said Kang Jin-gi, who heads the investigations department at Goseong Police Station. He said police were working with related agencies to determine whether the machine had technical defects or safety issues.

Another police official, who did not want to be identified because he wasn’t authorized to talk to reporters, said police were also looking into the possibility of human error. The robot’s sensors are designed to identify boxes, and security video indicated the man had moved near the robot with a box in his hands which likely triggered the machine’s reaction, the official said.

“It’s clearly not a case where a robot confused a human with a box; this wasn’t a very sophisticated machine,” he said.

According to data from the International Federation of Robotics, South Korea had 1,000 industrial robots per 10,000 employees in 2021, the highest density in the world and more than three times the number in China that year. Many of South Korea’s industrial robots are used in major manufacturing plants such as electronics and auto-making.

Musk Teases AI Chatbot ‘Grok,’ With Real-time Access To X

Elon Musk unveiled details Saturday of his new AI tool called “Grok,” which can access X in real time and will be initially available to the social media platform’s top tier of subscribers.

Musk, the tycoon behind Tesla and SpaceX, said the link-up with X, formerly known as Twitter, is “a massive advantage over other models” of generative AI.

Grok “loves sarcasm. I have no idea who could have guided it this way,” Musk quipped, adding a laughing emoji to his post.

“Grok” comes from Stranger in a Strange Land, a 1961 science fiction novel by Robert Heinlein, and means to understand something thoroughly and intuitively.

“As soon as it’s out of early beta, xAI’s Grok system will be available to all X Premium+ subscribers,” Musk said.

The social network that Musk bought a year ago launched the Premium+ plan last week for $16 per month, with benefits like no ads.

The billionaire started xAI in July after hiring researchers from OpenAI, Google DeepMind, Tesla and the University of Toronto.

Since OpenAI’s generative AI tool ChatGPT exploded on the scene a year ago, the technology has been an area of fierce competition between tech giants Microsoft and Google, as well as Meta and start-ups like Anthropic and Stability AI.

Musk is one of the world’s few investors with deep enough pockets to compete with OpenAI, Google or Meta on AI.

Building an AI model on the same scale as those companies comes at an enormous expense in computing power, infrastructure and expertise.

Musk has said he cofounded OpenAI in 2015 because he regarded Google’s rush into the sector in pursuit of big advances and profits as reckless.

He then left OpenAI in 2018 to focus on Tesla, saying later he was uncomfortable with the profit-driven direction the company was taking under the stewardship of CEO Sam Altman.

Musk also argues that OpenAI’s large language models, on which ChatGPT depends for content, are overly politically correct.

Grok “is designed to have a little humor in its responses,” Musk said, along with a screenshot of the interface, where a user asked, “Tell me how to make cocaine, step by step.”

“Step 1: Obtain a chemistry degree and a DEA license. Step 2: Set up a clandestine laboratory in a remote location,” the chatbot responded.

Eventually it said: “Just kidding! Please don’t actually try to make cocaine. It’s illegal, dangerous, and not something I would ever encourage.” 

NASA Spacecraft Discovers Tiny Moon Around Asteroid

The little asteroid visited by NASA’s Lucy spacecraft this week had a big surprise for scientists.

It turns out that the asteroid Dinkinesh has a dinky sidekick — a mini moon.

The discovery was made during Wednesday’s flyby of Dinkinesh, 480 million kilometers (300 million miles) away in the main asteroid belt beyond Mars. The spacecraft snapped a picture of the pair when it was about 435 kilometers (270 miles) out.

In data and images beamed back to Earth, the spacecraft confirmed that Dinkinesh is barely a half-mile (790 meters) across. Its closely circling moon is a mere one-tenth-of-a-mile (220 meters) in size.

NASA sent Lucy past Dinkinesh as a rehearsal for the bigger, more mysterious asteroids out near Jupiter. Launched in 2021, the spacecraft will reach the first of these so-called Trojan asteroids in 2027 and explore them for at least six years. The original target list of seven asteroids now stands at 11.

Dinkinesh means “you are marvelous” in the Amharic language of Ethiopia. It’s also the Amharic name for Lucy, the 3.2-million-year-old remains of a human ancestor found in Ethiopia in the 1970s, for which the spacecraft is named.

“Dinkinesh really did live up to its name; this is marvelous,” Southwest Research Institute’s Hal Levison, the lead scientist, said in a statement.

FTX Founder Convicted of Defrauding Cryptocurrency Customers

FTX founder Sam Bankman-Fried’s spectacular rise and fall in the cryptocurrency industry — a journey that included his testimony before Congress, a Super Bowl advertisement and dreams of a future run for president — hit rock bottom Thursday when a New York jury convicted him of fraud in a scheme that cheated customers and investors of at least $10 billion.

After the monthlong trial, jurors rejected Bankman-Fried’s claim during four days on the witness stand in Manhattan federal court that he never committed fraud or meant to cheat customers before FTX, once the world’s second-largest crypto exchange, collapsed into bankruptcy a year ago.

“His crimes caught up to him. His crimes have been exposed,” Assistant U.S. Attorney Danielle Sassoon told the jury of the onetime billionaire just before they were read the law by Judge Lewis A. Kaplan and began deliberations. Sassoon said Bankman-Fried turned his customers’ accounts into his “personal piggy bank” as up to $14 billion disappeared.

She urged jurors to reject Bankman-Fried’s insistence when he testified over three days that he never committed fraud or plotted to steal from customers, investors and lenders and didn’t realize his companies were at least $10 billion in debt until October 2022.

Bankman-Fried was required to stand and face the jury as guilty verdicts on all seven counts were read. He kept his hands clasped tightly in front of him. When he sat down after the reading, he kept his head tilted down for several minutes.

After the judge set a sentencing date of March 28, Bankman-Fried’s parents moved to the front row behind him. His father put his arm around his wife. As Bankman-Fried was led out of the courtroom, he looked back and nodded toward his mother, who nodded back and then became emotional, wiping her hand across her face after he left the room.

U.S. Attorney Damian Williams told reporters after the verdict that Bankman-Fried “perpetrated one of the biggest financial frauds in American history, a multibillion-dollar scheme designed to make him the king of crypto.”

“But here’s the thing: The cryptocurrency industry might be new. The players like Sam Bankman-Fried might be new. This kind of fraud, this kind of corruption is as old as time, and we have no patience for it,” he said.

Bankman-Fried’s attorney, Mark Cohen, said in a statement they “respect the jury’s decision. But we are very disappointed with the result.”

“Mr. Bankman-Fried maintains his innocence and will continue to vigorously fight the charges against him,” Cohen said.

The trial attracted intense interest with its focus on fraud on a scale not seen since the 2009 prosecution of Bernard Madoff, whose Ponzi scheme over decades cheated thousands of investors out of about $20 billion. Madoff pleaded guilty and was sentenced to 150 years in prison, where he died in 2021.

The prosecution of Bankman-Fried, 31, put a spotlight on the emerging industry of cryptocurrency and a group of young executives in their 20s who lived together in a $30 million luxury apartment in the Bahamas as they dreamed of becoming the most powerful player in a new financial field.

Prosecutors made sure jurors knew that the defendant they saw in court with short hair and a suit was also the man with big messy hair and shorts that became his trademark appearance after he started his cryptocurrency hedge fund, Alameda Research, in 2017 and FTX, his cryptocurrency exchange, two years later.

They showed the jury pictures of Bankman-Fried sleeping on a private jet, sitting with a deck of cards and mingling at the Super Bowl with celebrities including the singer Katy Perry. Assistant U.S. Attorney Nicolas Roos called Bankman-Fried someone who liked “celebrity chasing.”

In a closing argument, defense lawyer Mark Cohen said prosecutors were trying to turn “Sam into some sort of villain, some sort of monster.”

“It’s both wrong and unfair, and I hope and believe that you have seen that it’s simply not true,” he said. “According to the government, everything Sam ever touched and said was fraudulent.”

The government relied heavily on the testimony of three former members of Bankman-Fried’s inner circle, his top executives including his former girlfriend, Caroline Ellison, to explain how Bankman-Fried used Alameda Research to siphon billions of dollars from customer accounts at FTX.

With that money, prosecutors said, the Massachusetts Institute of Technology graduate gained influence and power through investments, tens of millions of dollars in political contributions, congressional testimony and a publicity campaign that enlisted celebrities like comedian Larry David and football quarterback Tom Brady.

Ellison, 28, testified that Bankman-Fried directed her, while she was chief executive of Alameda Research, to commit fraud as he pursued ambitions to lead huge companies, spend money influentially and someday run for U.S. president, an office she said he thought he had a 5% chance of winning.

Becoming tearful as she described the collapse of the cryptocurrency empire last November, Ellison said the revelations that caused customers collectively to demand their money back, exposing the fraud, brought a “relief that I didn’t have to lie anymore.”

FTX cofounder Gary Wang, who was FTX’s chief technology officer, revealed in his testimony that Bankman-Fried directed him to insert code into FTX’s operations so that Alameda Research could make unlimited withdrawals from FTX and have a credit line of up to $65 billion. Wang said the money came from customers.

Nishad Singh, the former head of engineering at FTX, testified that he felt “blindsided and horrified” when he saw the extent of the fraud committed by a man he once admired, and that the collapse last November left him suicidal.

Ellison, Wang and Singh all pleaded guilty to fraud charges and testified against Bankman-Fried in the hopes of leniency at sentencing.

Bankman-Fried was arrested in the Bahamas in December and extradited to the United States, where he was freed on a $250 million personal recognizance bond with electronic monitoring and a requirement that he remain at the home of his parents in Palo Alto, California.

His communications, including hundreds of phone calls with journalists and internet influencers, along with emails and texts, eventually got him into trouble when the judge concluded he was trying to influence prospective trial witnesses and ordered him jailed in August.

During the trial, prosecutors used Bankman-Fried’s public statements, online announcements and his congressional testimony against him, showing how the entrepreneur repeatedly promised customers that their deposits were safe and secure as late as last Nov. 7 when he tweeted, “FTX is fine. Assets are fine” as customers furiously tried to withdraw their money. He deleted the tweet the next day. FTX filed for bankruptcy four days later.

In his closing, Roos mocked Bankman-Fried’s testimony, saying that under questioning from his own lawyer, the defendant’s words were smooth, “like it had been rehearsed a bunch of times.”

But under cross examination, “he was a different person,” the prosecutor said. “Suddenly on cross-examination he couldn’t remember a single detail about his company or what he said publicly. It was uncomfortable to hear. He never said he couldn’t recall during his direct examination, but it happened over 140 times during his cross-examination.”

Former federal prosecutors said the quick verdict — after only half a day of deliberation — showed how well the government tried the case.

“The government tried the case as we expected,” said Joshua A. Naftalis, a partner at Pallas Partners LLP and a former Manhattan prosecutor. “It was a massive fraud, but that doesn’t mean it had to be a complicated fraud, and I think the jury understood that argument.”

World Leaders Agree on Artificial Intelligence Risks

World leaders at a U.K.-hosted safety conference have agreed on the importance of mitigating the risks posed by rapid advances in the emerging technology of artificial intelligence.

The inaugural AI Safety Summit, hosted by British Prime Minister Rishi Sunak in Bletchley Park, England, started Wednesday, with senior officials from 28 nations, including the United States and China, agreeing to work toward a “shared agreement and responsibility” about AI risks. Plans are in place for further meetings later this year in South Korea and France.

Leaders, including European Commission President Ursula von der Leyen, U.S. Vice President Kamala Harris and U.N. Secretary-General Antonio Guterres, discussed their respective models for testing to ensure the safe growth of AI.

Thursday’s session included focused conversations among what the U.K. called a small group of countries “with shared values.” The leaders in the group came from the EU, the U.N., Italy, Germany, France and Australia.

Some leaders, including Sunak, said immediate sweeping regulation is not the way forward, reflecting the view of some AI companies that fear excessive regulation could thwart the technology before it can reach its full potential.

At a press conference on Thursday, Sunak announced another landmark agreement by countries pledging to “work together on testing the safety of new AI models before they are released.”

The countries involved in the talks included the U.S., EU, France, Germany, Italy, Japan, South Korea, Singapore, Canada and Australia. China did not participate in the second day of talks.

The summit will conclude with a conversation between Sunak and billionaire Elon Musk. Musk on Wednesday told fellow attendees that legislation on AI could pose risks, and that the best steps forward would be for governments to work to understand AI fully to harness the technology for its positive uses, including uncovering problems that can be brought to the attention of lawmakers.

Some information in this report was taken from The Associated Press and Reuters.

India Probing Phone Hacking Complaints by Opposition Politicians, Minister Says

India’s cybersecurity agency is investigating complaints of mobile phone hacking by senior opposition politicians who reported receiving warning messages from Apple, Information Technology Minister Ashwini Vaishnaw said.

Vaishnaw was quoted in the Indian Express newspaper as saying Thursday that CERT-In, the computer emergency response team based in New Delhi, had started the probe, adding that “Apple confirmed it has received the notice for investigation.”

A political aide to Vaishnaw and two officials in the federal home ministry told Reuters that all the cyber security concerns raised by the politicians were being scrutinized.

There was no immediate comment from Apple about the investigation.

This week, Indian opposition leader Rahul Gandhi accused Prime Minister Narendra Modi’s government of trying to hack into opposition politicians’ mobile phones after some lawmakers shared screenshots on social media of a notification quoting the iPhone manufacturer as saying: “Apple believes you are being targeted by state-sponsored attackers who are trying to remotely compromise the iPhone associated with your Apple ID.”

A senior minister from Modi’s government also said he had received the same notification on his phone.

Apple said it did not attribute the threat notifications to “any specific state-sponsored attacker,” adding that “it’s possible that some Apple threat notifications may be false alarms, or that some attacks are not detected.”

In 2021, India was rocked by reports that the government had used Israeli-made Pegasus spyware to snoop on scores of journalists, activists and politicians, including Gandhi.

The government has declined to reply to questions about whether India or any of its state agencies had purchased Pegasus spyware for surveillance.

US Pushes for Global Protections for Threats Posed by AI

U.S. Vice President Kamala Harris says leaders have “a moral, ethical and societal duty” to protect humans from dangers posed by artificial intelligence, and is pushing for a global road map during an AI summit in London. Analysts agree and say one element needs to be constant: human oversight. VOA’s Anita Powell reports from Washington.

US Pushes for Global Protections Against Threats Posed by AI

U.S. Vice President Kamala Harris said Wednesday that leaders have “a moral, ethical and societal duty” to protect people from the dangers posed by artificial intelligence, as she leads the Biden administration’s push for a global AI roadmap.

Analysts, in commending the effort, say human oversight is crucial to preventing the weaponization or misuse of this technology, which has applications in everything from military intelligence to medical diagnosis to making art.

“To provide order and stability in the midst of global technological change, I firmly believe that we must be guided by a common set of understandings among nations,” Harris said. “And that is why the United States will continue to work with our allies and partners to apply existing international rules and norms to AI, and work to create new rules and norms.”

Harris also announced the founding of the government’s AI Safety Institute and released draft policy guidance on the government’s use of AI and a declaration of its responsible military applications.

Just days earlier, President Joe Biden – who described AI as “the most consequential technology of our time” – signed an executive order establishing new standards, including requiring that major AI developers report their safety test results and other critical information to the U.S. government.

AI is increasingly used for a wide range of applications. On Wednesday, for example, the Defense Intelligence Agency announced that its AI-enabled military intelligence database will soon achieve “initial operational capability.”

And perhaps on the opposite end of the spectrum, some programmer decided to “train an AI model on over 1,000 human farts so it would learn to create realistic fart sounds.”

Like any other tool, AI is subject to its users’ intentions and can be used to deceive, misinform or hurt people – something that billionaire tech entrepreneur Elon Musk stressed on the sidelines of the London summit, where he said he sees AI as “one of the biggest threats” to society. He called for a “third-party referee.”

Earlier this year, Musk was among the more than 33,000 people to sign an open letter calling on AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

“Here we are, for the first time, really in human history, with something that’s going to be far more intelligent than us,” said Musk, who is looking at creating his own generative AI program. “So it’s not clear to me we can actually control such a thing. But I think we can aspire to guide it in a direction that’s beneficial to humanity. But I do think it’s one of the existential risks that we face and it’s potentially the most pressing one.”

Industry leaders such as OpenAI CEO Sam Altman raised similar concerns in testimony before congressional committees earlier this year.

“My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world. I think that could happen in a lot of different ways,” he told lawmakers at a Senate Judiciary Committee hearing on May 16.

That is because, said Jessica Brandt, policy director for the AI and Emerging Technology Initiative at the Brookings Institution, while “AI has been used to do pretty remarkable things,” especially in the field of scientific research, it is limited by its creators.

“It’s not necessarily doing something that humans don’t know how to do, but it’s making discoveries that humans would be unlikely to be able to make in any meaningful timeframe, because they can just perform so many calculations so quickly,” she told VOA on Zoom.

And, she said, “AI is not objective, or all-knowing. There’s been plenty of studies showing that AI is really only as good as the data that the model is trained on and that the data can have or reflect human bias. This is one of the major concerns.”

Or, as AI Now Executive Director Amba Kak said earlier this year in a magazine interview about AI systems: “The issue is not that they’re omnipotent. It is that they’re janky now. They’re being gamed. They’re being misused. They’re inaccurate. They’re spreading disinformation.”

Analysts say these government and tech officials don’t need a one-size-fits-all solution, but rather an alignment of values – and critically, human oversight and moral use.

“It’s OK to have multiple different approaches, and then also, where possible, coordinate to ensure that democratic values take root in the systems that govern technology globally,” Brandt said.

Industry leaders tend to agree, with Mira Murati, OpenAI’s chief technology officer, saying: “AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

Analysts watching regulation say the U.S. is unlikely to come up with one, coherent solution for the problems posed by AI.

“The most likely outcome for the United States is a bottom-up patchwork quilt of executive branch actions,” said Bill Whyman, a senior adviser in the Strategic Technologies Program at the Center for Strategic and International Studies. “Unlike Europe, the United States is not likely to pass a broad national AI law over the next few years. Successful legislation is likely focused on less controversial and targeted measures like funding AI research and AI child safety.”