Category Archives: Technology

Silicon Valley & technology news. Technology is the application of conceptual knowledge to achieve practical goals, especially in a reproducible way. The word technology can also mean the products resulting from such efforts, including both tangible tools such as utensils or machines, and intangible ones such as software. Technology plays a critical role in science, engineering, and everyday life.

How Green Is My Forest? There’s an App to Tell You

A web-based application that monitors the impact of successful forest-rights claims can help rural communities manage resources better and improve their livelihoods, according to analysts.

The app was developed by the Indian School of Business (ISB) to track community rights in India, where the 2006 Forest Rights Act aimed to improve the lives of rural people by recognizing their entitlement to inhabit and live off forests.

With a smartphone or tablet, the app can be used to track the status of a community rights claim.

After the claim is approved, community members can use it to collect and analyze data on tree cover, burned areas and other changes in the forest, said Arvind Khare of the Washington, D.C.-based advocacy group Rights and Resources Initiative (RRI).

“Even in areas that have made great progress in awarding rights, it is very hard to track the socio-ecological impact of the rights on the community,” said Khare, a senior director at RRI, which is testing the app in India.

“Recording the data and analyzing it can tell you which resources need better management, so that these are not used haphazardly, but in a manner that benefits them most,” he told the Thomson Reuters Foundation.

For example, community members can record data on forest products they use such as leaves, flowers, wood and sap, making it easier to ensure that they are not over-exploited, he said.

While indigenous and local communities own more than half the world’s land under customary rights, they have secure legal rights to only 10 percent, according to RRI.

Governments maintain legal and administrative authority over more than two-thirds of global forest area, leaving local communities with only limited access.

In India, under the 2006 law, at least 150 million people could have their rights to about 40 million hectares (154,400 sq miles) of forest land recognized.

But rights to only 3 percent of land have been granted, with states largely rejecting community claims, campaigners say.

While the app is being tested in India, Khare said it can also be used in countries including Peru, Mali, Liberia and Indonesia, where RRI supports rural communities in scaling up forest rights claims.

Data can be entered offline on the app, and then uploaded to the server when the device is connected to the internet. Data is stored in the cloud and accessible to anyone, said Ashwini Chhatre, an associate professor at ISB.
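That offline-first flow, saving observations locally and then pushing them to a shared server once a connection is available, can be sketched roughly as follows. This is a minimal illustration only; the record fields, endpoint URL and storage choices below are assumptions for the example, not details of the ISB app.

```python
import json
import sqlite3

import requests  # assumed HTTP client; the ISB app's actual stack is not documented here

UPLOAD_URL = "https://example.org/api/forest-observations"  # hypothetical endpoint

def save_offline(db_path: str, record: dict) -> None:
    """Store an observation locally so it survives until connectivity returns."""
    with sqlite3.connect(db_path) as db:
        db.execute("CREATE TABLE IF NOT EXISTS pending (payload TEXT)")
        db.execute("INSERT INTO pending (payload) VALUES (?)", (json.dumps(record),))

def sync_when_online(db_path: str) -> int:
    """Upload queued observations; anything that fails stays queued for the next attempt."""
    uploaded = 0
    with sqlite3.connect(db_path) as db:
        for rowid, payload in db.execute("SELECT rowid, payload FROM pending").fetchall():
            try:
                response = requests.post(UPLOAD_URL, json=json.loads(payload), timeout=10)
                response.raise_for_status()
            except requests.RequestException:
                continue  # still offline or server error; retry later
            db.execute("DELETE FROM pending WHERE rowid = ?", (rowid,))
            uploaded += 1
    return uploaded

# Example: a community member records tree cover for a plot while out of network range.
save_offline("claims.db", {"claim_id": "XYZ-01", "tree_cover_pct": 62, "burned_area_ha": 0.4})
```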

“All this while local communities have been fighting simply for the right to live in the forest and use its resources. Now, they can use data to truly benefit from it,” he said.

App Taken Down After Pittsburgh Gunman Revealed as User

Gab, a social networking site often accused of being a haven for white supremacists, neo-Nazis and other hate groups, went offline Monday after being refused service by several web hosting providers following revelations that Pittsburgh synagogue shooting suspect Robert Bowers used the platform to threaten Jews.

“Gab isn’t going anywhere,” said Andrew Torba, chief executive officer and creator of Gab.com. “We will exercise every possible avenue to keep Gab online and defend free speech and individual liberty for all people.”

Founded two years ago as an alternative to mainstream social networking sites like Facebook and Twitter, Gab was billed by Torba as a haven for free speech. The site soon began attracting online members of the alt-right and adherents of other extremist ideologies unwelcome on other platforms.

“What makes the entirely left-leaning Big Social monopoly qualified to tell us what is ‘news’ and what is ‘trending’ and to define what ‘harassment’ means?” Torba wrote in a 2016 email to Buzzfeed News.

The tide swiftly turned against Gab after Bowers entered the Tree of Life synagogue Saturday morning with an assault rifle and several handguns, killing 11 and wounding six.

It came to light that Bowers had made several anti-Semitic posts on the site, including one the morning of the shooting that read “HIAS likes to bring invaders in that kill our people. I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.” HIAS (the Hebrew Immigrant Aid Society) helps refugees resettle in the United States.

After Bowers’ posts were picked up by national media, PayPal and payment processor Stripe announced that they would end their relationships with Gab. Hosting providers followed soon after, and the website was nonfunctional by Monday morning.

In an interview with NPR aired Monday, Torba defended leaving up Bowers’ post from the morning of the shooting.

“Do you see a direct threat in there?” Torba said. “Because I don’t. What would you expect us to do with a post like that? You want us to just censor anybody who says the phrase ‘I’m going in’? Because that’s just absurd.”

Teen’s Program Could Improve Pancreatic Cancer Treatment

Pancreatic cancer treatment could become more advanced with help from 13-year-old Rishab Jain. He’s created a tool for doctors to locate the hard-to-find pancreas more quickly and precisely during cancer treatment. The teen recently won a prestigious young scientist award for his potentially game-changing idea. VOA’s Julie Taboh has more.

Plant Fibers Make Stronger Concrete

It may surprise you that cement is responsible for 7 percent of the world’s carbon emissions. That’s because it takes a lot of heat to produce cement, the powdery base material that eventually becomes concrete. But it turns out that simple fibers from carrots could not only reduce that carbon footprint but also make concrete stronger. VOA’s Kevin Enochs reports.

Q&A: Facebook Describes How It Detects ‘Inauthentic Behavior’

Facebook announced Friday that it had removed 82 Iranian-linked accounts on Facebook and Instagram. A Facebook spokesperson answered VOA’s questions about its process and efforts to detect what it calls “coordinated inauthentic behavior” by accounts pretending to be U.S. and U.K. citizens and aimed at U.S. and U.K. audiences.

Q: Facebook’s post says there were 7 “events hosted.” Any details about where, when, who?

A: Of seven events, the first was scheduled for February 2016, and the most recent was scheduled for June 2018. One hundred and ten people expressed interest in at least one of these events, and two events received no interest. We cannot confirm whether any of these events actually occurred. Some appear to have been planned to occur only online. The themes are similar to the rest of the activity we have described.

Q: Is there any indication this was an Iranian government-linked program?

A: We recently discussed the challenges involved with determining who is behind information operations. In this case, we have not been able to determine any links to the Iranian government, but we are continuing to investigate. Also, Atlantic Council’s Digital Forensic Research Lab has shared their take on the content in this case here.

Q: How long was the time between discovering this and taking down the pages?

A: We first detected this activity one week ago. As soon as we detected this activity, the teams in our elections war room worked quickly to investigate and remove these bad actors. Given the elections, we took action as soon as we’d completed our initial investigation and shared the information with U.S. and U.K. government officials, U.S. law enforcement, Congress, other technology companies and the Atlantic Council’s Digital Forensic Research Lab.

Q: How have you improved the reporting processes in the past year to speed the ability to remove such content?

A: Just to clarify, today’s takedown was a result of our teams proactively discovering suspicious signals on a page that appeared to be run by Iranian users. From there, we investigated and found the set of pages, groups and accounts that we removed today.

To your broader question on how we’ve improved over the past two years: To ensure that we stay ahead, we’ve invested heavily in better technology and more people. There are now over 20,000 people working on safety and security at Facebook, and thanks to improvements in artificial intelligence we detect many fake accounts, the root cause of so many issues, before they are even created. We’re also working more closely with governments, law enforcement, security experts and other companies because no one organization can do this on its own.

Q: How many people do you have monitoring content in English now? In Persian?

A: We have over 7,500 content reviewers globally. We don’t provide breakdowns of the number of people working in specific languages or regions because that alone doesn’t reflect the number of people working to review content for a particular country or region at any particular time.

Q: How are you training people to spot this content? What’s the process?

A: To be clear, today’s takedown was the result of an internal investigation involving a combination of manual work by our teams of skilled investigators and data science teams using automated tools to look for larger patterns to identify potentially inauthentic behavior. In this case, we relied on both of these techniques working together.

On your separate question about training content reviewers, here is more on our content reviewers and how we support them.

Q: Does Facebook have any more information on how effective this messaging is at influencing behavior?

A: We aren’t in a position to know.

Study: Online Attacks on Jews Ramp Up Before Election Day

Far-right extremists have ramped up an intimidating wave of anti-Semitic harassment against Jewish journalists, political candidates and others ahead of next month’s U.S. midterm elections, according to a report released Friday by a Jewish civil rights group.

The Anti-Defamation League’s report says its researchers analyzed more than 7.5 million Twitter messages from Aug. 31 to Sept. 17 and found nearly 30 percent of the accounts repeatedly tweeting derogatory terms about Jews appeared to be automated “bots.”

But accounts controlled by real-life humans often mount the most “worrisome and harmful” anti-Semitic attacks, sometimes orchestrated by leaders of neo-Nazi or white nationalist groups, the researchers said.

“Both anonymity and automation have been used in online propaganda offensives against the Jewish community during the 2018 midterms,” they wrote.

Billionaire philanthropist George Soros was a leading subject of harassing tweets. Soros, a Hungarian-born Jew demonized by right-wing conspiracy theorists, is one of the prominent Democrats who had pipe bombs sent to them this week.

The ADL’s study concludes online disinformation and abuse is disproportionately targeting Jews in the U.S. “during this crucial political moment.”

“Prior to the election of President Donald Trump, anti-Semitic harassment and attacks were rare and unexpected, even for Jewish Americans who were prominently situated in the public eye. Following his election, anti-Semitism has become normalized and harassment is a daily occurrence,” the report says.

The New York City-based ADL has commissioned other studies of online hate, including a report in May that estimated about 3 million Twitter users posted or re-posted at least 4.2 million anti-Semitic tweets in English over a 12-month period ending Jan. 28. An earlier report from the group said anti-Semitic incidents in the U.S. in the previous year had reached the highest tally it has counted in more than two decades.

For the latest report, researchers interviewed five Jewish people, including two recent political candidates, who had faced “human-based attacks” against them on social media this year. Their experiences demonstrated that anti-Semitic harassment “has a chilling effect on Jewish Americans’ involvement in the public sphere,” their report says.

“While each interview subject spoke of not wanting to let threats of the trolls affect their online activity, political campaigns, academic research or news reporting, they all admitted the threats of violence and deluges of anti-Semitism had become part of their internal equations,” researchers wrote.

The most popular term used in tweets containing #TrumpTrain was “Soros.” The study also found a “surprising” abundance of tweets referencing “QAnon,” a right-wing conspiracy theory that started on an online message board and has been spread by Trump supporters.

“There are strong anti-Semitic undertones, as followers decry George Soros and the Rothschild family as puppeteers,” researchers wrote.

Facebook Removes 82 Iranian-Linked Accounts

Facebook announced Friday that it has removed 82 accounts, pages or groups from its site and Instagram that originated in Iran, with some of the account owners posing as residents of the United States or Britain and posting about liberal politics.

At least one of the Facebook pages had more than one million followers, the firm said. The company said it did not know if the coordinated behavior was tied to the Iranian government. Less than $100 in advertising on Facebook and Instagram was spent to amplify the posts, the firm said.

The company said in a post titled “Taking Down Coordinated Inauthentic Behavior from Iran” that some of the accounts and pages were tied to ones taken down in August.

“Today we removed multiple pages, groups and accounts that originated in Iran for engaging in coordinated inauthentic behavior on Facebook and Instagram,” the firm said. “This is when people or organizations create networks of accounts to mislead others about who they are, or what they’re doing.”

Monitoring online activity

Facebook says it has ramped up its monitoring of the authenticity of accounts in the run-up to the U.S. midterm election, with more than 20,000 people working on safety and security. The social media firm says it has created an election “war room” on its campus to monitor behavior it deems “inauthentic.”

Nathaniel Gleicher, head of cybersecurity policy for Facebook, said that the behavior was coordinated and originated in Iran.

The posts appeared as if they were being made by citizens in the United States and, in a few cases, in Britain. The posts addressed “politically charged topics such as race relations, opposition to the president, and immigration.”

In terms of the reach of the posts, “about 1.02 million accounts followed at least one of these Pages, about 25,000 accounts joined at least one of these groups, and more than 28,000 accounts followed at least one of these Instagram accounts.”

A more advanced approach

The company released some images related to the accounts. 

An analysis of 10 Facebook pages and 14 Instagram accounts by the Atlantic Council’s Digital Forensic Research Lab concluded the pages and accounts were newer, and more advanced, than another batch of Iranian-linked pages and accounts that were removed in August.

“These assets were designed to engage in, rather than around, the political dialogue,” the lab’s Ben Nimmo and Graham Brookie wrote. “Their behavior showed how much they had adapted from earlier operations, focusing more on social media than third party websites.”

And those behind the accounts appeared to have learned a lesson from Russia’s ongoing influence campaign.

“One main aim of the Iranian group of accounts was to inflame America’s partisan divides,” the analysis said. “The tone of the comments added to the posts suggests that this had some success.”

Targeting U.S. midterm voters

Some of the accounts and pages directly targeted the upcoming U.S. elections, showing individuals talking about how they voted or calling on others to vote.

Most were aimed at a liberal audience.

“Proud to say that my first ever vote was for @BetoORourke,” said one post from an account called “No racism no war,” which had 412,000 likes and about half a million followers.

“Get your ass out and VOTE!!! Do your part,” said another post shared by the same account.

U.S. intelligence and national security officials have repeatedly warned of efforts by countries like Iran and China, in addition to Russia, to influence and interfere with U.S. elections next month and in 2020.

Democratic Representative Adam Schiff, the ranking member of the House Intelligence Committee, said Facebook’s decision to pull down the questionable pages and accounts and share the information with the public is critical to “keeping users aware of and inoculated against such foreign influence campaigns.”

“Facebook’s discovery and exposure of additional nefarious Iranian activity on its platforms so close to the midterms is an important reminder that both the public and private sector have a shared responsibility to remain vigilant as foreign entities continue their attempts to influence our political dialogue online,” Schiff said in a statement.

But not all the Iranian material was focused on the U.S. midterm election.

“These accounts masqueraded primarily as American liberals, posting only small amounts of anti-Saudi and anti-Israeli content,” the Digital Forensic Research Lab said.

A number of posts also took aim at U.S. policy in the Middle East in general. One post by @sut_racism accused Ivanka Trump of having “the blood of Dead Children on Her Hands.”

Still, the analysts said many of the posts also contained errors that gave away their non-U.S. origins. For example, in one post talking about the deaths of U.S. soldiers in World War II, the account’s authors used a photo of Soviet soldiers.

Michelle Quinn contributed to this report.

UK Fines Facebook Over Data Privacy Scandal, EU Seeks Audit

British regulators slapped Facebook on Thursday with a fine of 500,000 pounds ($644,000) — the maximum possible — for failing to protect the privacy of its users in the Cambridge Analytica scandal.

At the same time, European Union lawmakers demanded an audit of Facebook to better understand how it handles information, reinforcing how regulators in the region are taking a tougher stance on data privacy compared with U.S. authorities.

Britain’s Information Commissioner’s Office found that between 2007 and 2014, Facebook processed the personal information of users unfairly by giving app developers access to their information without informed consent. The failings meant the data of some 87 million people was used without their knowledge.

“Facebook failed to sufficiently protect the privacy of its users before, during and after the unlawful processing of this data,” said Elizabeth Denham, the information commissioner. “A company of its size and expertise should have known better and it should have done better.”

The ICO said a subset of the data was later shared with other organizations, including SCL Group, the parent company of political consultancy Cambridge Analytica, which counted U.S. President Donald Trump’s 2016 election campaign among its clients. News that the consultancy had used data from tens of millions of Facebook accounts to profile voters ignited a global scandal on data rights.

The fine amounts to a speck on Facebook’s finances. In the second quarter, the company generated revenue at a rate of nearly $100,000 per minute. That means it will take less than seven minutes for Facebook to bring in enough money to pay for the fine.

But it’s the maximum penalty allowed under the law in force at the time the breach occurred. Had the scandal taken place after new EU data protection rules went into effect this year, the ceiling would have been far higher: fines of up to 17 million pounds or 4 percent of global annual revenue, whichever is higher. Under that standard, Facebook could have faced a penalty of about $1.6 billion, or 4 percent of its revenue last year.
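As a rough check on those numbers, here is the arithmetic using only the figures cited in this article (the fine converted to about $644,000, roughly $100,000 of revenue per minute, and a cap of 4 percent of annual revenue):

```python
# Back-of-envelope check using only figures quoted in this article.
fine_usd = 644_000            # the 500,000-pound fine, converted to dollars
revenue_per_minute = 100_000  # Facebook's approximate second-quarter revenue rate
annual_revenue = 40e9         # implied by "$1.6 billion ... 4 percent of its revenue last year"

minutes_to_cover_fine = fine_usd / revenue_per_minute  # about 6.4 minutes
gdpr_style_cap = 0.04 * annual_revenue                 # about $1.6 billion

print(f"Minutes of revenue needed to cover the fine: {minutes_to_cover_fine:.1f}")
print(f"GDPR-style cap at 4% of annual revenue: ${gdpr_style_cap:,.0f}")
```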

The data rules are tougher than the ones in the United States, and a debate is ongoing on how the U.S. should respond. California is moving to put in regulations similar to the EU’s strict rules by 2020 and other states are mulling more aggressive laws. That’s rattled the big tech companies, which are pushing for a federal law that would treat them more leniently.

Facebook CEO Mark Zuckerberg said in a video message to a big data privacy conference in Brussels this week that “we have a lot more work to do” to safeguard personal data.

Responding to the U.K. fine, Facebook said in a statement that it is reviewing the decision.

“While we respectfully disagree with some of their findings, we have said before that we should have done more to investigate claims about Cambridge Analytica and taken action in 2015. We are grateful that the ICO has acknowledged our full cooperation throughout their investigation.”

Facebook also took solace in the fact that the ICO did not definitively assert that U.K. users had their data shared for campaigning. But the commissioner noted in her statement that “even if Facebook’s assertion is correct,” U.S. residents would have used the site while visiting the U.K.

EU lawmakers had summoned Zuckerberg in May to testify about the Cambridge Analytica scandal.

In their vote on Thursday, they said Facebook should agree to a full audit by Europe’s cyber security agency and data protection authority “to assess data protection and security of users’ personal data.”

The EU lawmakers also called for new electoral safeguards online, a ban on profiling for electoral purposes and moves to make it easier to recognize paid political advertisements and their financial backers.

Google Abandons Berlin Campus Plan After Locals Protest

Google is abandoning plans to establish a campus for tech startups in Berlin after protests from residents worried about gentrification.

The internet giant confirmed reports Thursday it will sublet the former electrical substation in the capital’s Kreuzberg district to two charitable organizations, Betterplace.org and Karuna.

Google has more than a dozen so-called campuses around the world. They are intended as hubs to bring together potential employees, startups and investors.

Protesters had recently picketed the Umspannwerk site with placards such as “Google go home.”

Karuna, which helps disadvantaged children, said Google will pay 14 million euros ($16 million) toward renovation and maintenance for the coming five years.

Google said it will continue to work with startups in Berlin, which has become a magnet for tech companies in Germany in recent years.

Google Abandons Planned Berlin Office Hub

Campaigners in a bohemian district of Berlin celebrated Wednesday after internet giant Google abandoned strongly opposed plans to open a large campus there.

The US firm had planned to set up an incubator for start-up companies in Kreuzberg, one of the older districts in the west of the capital.

But the company’s German spokesman Ralf Bremer announced Wednesday that the 3,000-square-metre (3,590-square-yard) space, planned to host offices, cafes and communal work areas, would instead go to two local humanitarian associations.

Bremer did not say if local resistance to the plans over the past two years had played a part in the change of heart, although he had told the Berliner Zeitung daily that Google does not allow protests to dictate its actions.

“The struggle pays off,” tweeted “GloReiche Nachbarschaft”, one of the groups opposed to the Kreuzberg campus plan and part of the “F**k off Google” campaign.

Some campaigners objected to what they described as Google’s “evil” corporate practices, such as tax evasion and the unethical use of personal data.

Others opposed the gentrification of the district, which they say is pricing too many people out of the area.

A recent study carried out by the consultancy Knight Frank concluded that property prices are rising faster in Berlin than anywhere else in the world: they jumped 20.5 percent between 2016 and 2017.

In Kreuzberg over the same period, the rise was an astonishing 71 percent.

Kreuzberg, which lay alongside the Berlin Wall that divided East and West Berlin during the Cold War, has traditionally been a bastion of the city’s underground and radical culture.

Facebook Unveils Systems for Catching Child Nudity, ‘Grooming’ of Children

Facebook Inc said on Wednesday that company moderators during the last quarter removed 8.7 million user images of child nudity with the help of previously undisclosed software that automatically flags such photos.

The machine learning tool rolled out over the last year identifies images that contain both nudity and a child, allowing increased enforcement of Facebook’s ban on photos that show minors in a sexualized context.

A similar system also disclosed Wednesday catches users engaged in “grooming,” or befriending minors for sexual exploitation.

Facebook’s global head of safety Antigone Davis told Reuters in an interview that the “machine helps us prioritize” and “more efficiently queue” problematic content for the company’s trained team of reviewers.
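Facebook has not published how the tool works internally, but the behavior described, flagging an image only when it appears to show both nudity and a minor and then queuing it for human review by priority, can be sketched in outline. The scores, thresholds and names below are illustrative assumptions, not Facebook’s actual system.

```python
import heapq
from dataclasses import dataclass, field

# Illustrative thresholds only; Facebook's real models, scores and cutoffs are not public.
NUDITY_THRESHOLD = 0.9
MINOR_THRESHOLD = 0.8

@dataclass(order=True)
class ReviewItem:
    priority: float                      # lower value = popped first from the min-heap
    image_id: str = field(compare=False)

def flag_image(image_id: str, nudity_score: float, minor_score: float, queue: list) -> bool:
    """Queue an image for human review only if both signals clear their thresholds."""
    if nudity_score >= NUDITY_THRESHOLD and minor_score >= MINOR_THRESHOLD:
        # Negate the combined score so the most urgent items are reviewed first.
        heapq.heappush(queue, ReviewItem(-(nudity_score * minor_score), image_id))
        return True
    return False

def next_for_review(queue: list) -> str | None:
    """Hand the highest-priority flagged image to a trained human reviewer."""
    return heapq.heappop(queue).image_id if queue else None

review_queue: list[ReviewItem] = []
flag_image("img_001", nudity_score=0.95, minor_score=0.91, queue=review_queue)  # flagged
flag_image("img_002", nudity_score=0.97, minor_score=0.10, queue=review_queue)  # not flagged
print(next_for_review(review_queue))  # -> img_001
```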

The company is exploring applying the same technology to its Instagram app.

Under pressure from regulators and lawmakers, Facebook has vowed to speed up removal of extremist and illicit material.

Machine learning programs that sift through the billions of pieces of content users post each day are essential to its plan.

Machine learning is imperfect, and news agencies and advertisers are among those that have complained this year about Facebook’s automated systems wrongly blocking their posts.

Davis said the child safety systems would make mistakes but users could appeal.

“We’d rather err on the side of caution with children,” she said.

Facebook’s rules for years have banned even family photos of lightly clothed children uploaded with “good intentions,” concerned about how others might abuse such images.

Before the new software, Facebook relied on users or its adult nudity filters to catch child images. A separate system blocks child pornography that has previously been reported to authorities.

Facebook has not previously disclosed data on child nudity removals, though some would have been counted among the 21 million posts and comments it removed in the first quarter for sexual activity and adult nudity.

Facebook said the program, which learned from its collection of nude adult photos and clothed children photos, has led to more removals. It makes exceptions for art and history, such as the Pulitzer Prize-winning photo of a naked girl fleeing a Vietnam War napalm attack.

Protecting minors

The child grooming system evaluates factors such as how many people have blocked a particular user and whether that user quickly attempts to contact many children, Davis said.
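Davis did not say how those signals are weighed, but a simple version of that kind of behavioral scoring, combining how often an account has been blocked with how quickly it tries to contact many minors, might look like the sketch below. The weights and cutoff are invented for illustration.

```python
from datetime import datetime, timedelta

# Invented weights and cutoff; the real system's features and tuning are not public.
BLOCK_WEIGHT = 0.6
CONTACT_WEIGHT = 0.4
REVIEW_CUTOFF = 0.7

def grooming_risk_score(times_blocked: int, minor_contact_times: list[datetime]) -> float:
    """Combine two behavioral signals into a rough 0-to-1 risk score."""
    # Signal 1: how many other users have blocked this account (capped at 10).
    block_signal = min(times_blocked, 10) / 10

    # Signal 2: how many contact attempts toward minors were made in the last hour (capped at 20).
    one_hour_ago = datetime.now() - timedelta(hours=1)
    recent_contacts = sum(1 for t in minor_contact_times if t >= one_hour_ago)
    contact_signal = min(recent_contacts, 20) / 20

    return BLOCK_WEIGHT * block_signal + CONTACT_WEIGHT * contact_signal

def needs_human_review(score: float) -> bool:
    """Escalate high-scoring accounts to the trained review team."""
    return score >= REVIEW_CUTOFF

now = datetime.now()
score = grooming_risk_score(times_blocked=8, minor_contact_times=[now] * 15)
print(round(score, 2), needs_human_review(score))  # 0.78 True
```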

Michelle DeLaune, chief operating officer at the National Center for Missing and Exploited Children (NCMEC), said the organization expects to receive about 16 million child porn tips worldwide this year from Facebook and other tech companies, up from 10 million last year.

With the increase, NCMEC said it is working with Facebook to develop software to decide which tips to assess first.

Still, DeLaune acknowledged that a crucial blind spot is encrypted chat apps and secretive “dark web” sites where much of new child pornography originates.

Encryption of messages on Facebook-owned WhatsApp, for example, prevents machine learning from analyzing them.

DeLaune said NCMEC would educate tech companies and “hope they use creativity” to address the issue.

Apple CEO Backs Privacy Laws, Warns Data Being ‘Weaponized’

The head of Apple on Wednesday endorsed tough privacy laws for both Europe and the U.S. and renewed the technology giant’s commitment to protecting personal data, which he warned was being “weaponized” against users.

Speaking at an international conference on data privacy, Apple CEO Tim Cook applauded European Union authorities for bringing in a strict new data privacy law this year and said the iPhone maker supports a U.S. federal privacy law.

Cook’s remarks in Brussels, the European Union’s home base, along with comments due later from the top bosses of Google and Facebook, underscore how the U.S. tech giants are jostling to curry favor in the region as regulators tighten their scrutiny.

Data protection has become a major political issue worldwide, and European regulators have led the charge in setting new rules for the big internet companies. The EU’s new General Data Protection Regulation, or GDPR, requires companies to change the way they do business in the region, and a number of headline-grabbing data breaches have raised public awareness of the issue.

“In many jurisdictions, regulators are asking tough questions. It is time for the rest of the world, including my home country, to follow your lead,” Cook said.

“We at Apple are in full support of a comprehensive federal privacy law in the United States,” he said, to applause from hundreds of privacy officials from more than 70 countries.

In the U.S., California is moving to put in regulations similar to the EU’s strict rules by 2020 and other states are mulling more aggressive laws. That’s rattled the big tech companies, which are pushing for a federal law that would treat them more leniently.

Cook warned that technology’s promise to drive breakthroughs that benefit humanity is at risk of being overshadowed by the harm it can cause by deepening division and spreading false information. He said the trade in personal information “has exploded into a data industrial complex.”

“Our own information, from the everyday to the deeply personal, is being weaponized against us with military efficiency,” he said. Scraps of personal data are collected for digital profiles that let businesses know users better than they know themselves and allow companies to offer users increasingly extreme content that hardens their convictions, Cook said.

“This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them,” he said.

Cook’s appearance seems set to one-up his tech rivals and show off his company’s credentials in data privacy, which has become a weak point for both Facebook and Google.

“With the spotlight shining as directly as it is, Apple have the opportunity to show that they are the leading player and they are taking up the mantle,” said Ben Robson, a lawyer at Oury Clark specializing in data privacy. Cook’s appearance “is going to have good currency” with officials, he added.

Facebook CEO Mark Zuckerberg and Google head Sundar Pichai were scheduled to address by video the annual meeting of global data privacy chiefs. Only Cook attended in person.

He has repeatedly said privacy is a “fundamental human right” and vowed his company wouldn’t sell ads based on customer data the way companies like Facebook do.

His speech comes a week after the iPhone maker unveiled expanded privacy protection measures for people in the U.S., Canada, Australia and New Zealand, including allowing them to download all personal data held by Apple. European users already had access to this feature after GDPR took effect in May. Apple plans to expand it worldwide.

The International Conference of Data Protection and Privacy Commissioners, held in a different city every year, normally attracts little attention but its Brussels venue this year takes on symbolic meaning as EU officials ratchet up their tech regulation efforts.

The 28-nation EU took on global leadership of the issue when it beefed up data privacy regulations by launching GDPR. The new rules require companies to justify the collection and use of personal data gleaned from phones, apps and visited websites. They must also give EU users the ability to access and delete data, and to object to data use.

GDPR also allows for big fines benchmarked to revenue, which for big tech companies could amount to billions of dollars.

In the first big test of the new rules, Ireland’s data protection commission, which is a lead authority for Europe as many big tech firms are based in the country, is investigating Facebook after a data breach let hackers access 3 million EU accounts.

Google, meanwhile, shut down its Plus social network this month after revealing it had a flaw that could have exposed personal information of up to half a million people.

Hi-tech Cameras Spy Fugitive Emissions

The technology used in space missions can be expensive but it has some practical benefits here on Earth. Case in point: the thousands of high resolution images taken from the surface of Mars, collected by the two Mars rovers – Spirit and Opportunity. Now researchers at Carnegie Mellon University, in Pittsburgh, are using the same technology to analyze air pollution here on our planet. VOA’s George Putic reports.

US Tech Companies Reconsider Saudi Investment

The controversy over the death of Saudi Arabian journalist Jamal Khashoggi has shined a harsh light on the growing financial ties between Silicon Valley and the world’s largest oil exporter.

As Saudi Arabia’s annual investment forum in Riyadh — dubbed “Davos in the Desert” — continues, representatives from many of the kingdom’s highest-profile overseas tech investments are staying away, joining other international business leaders in shunning the conference amid lingering questions over what role the Saudi government played in the killing of a journalist inside its consulate in Turkey.

Tech leaders such as Steve Case, the co-founder of AOL, and Dara Khosrowshahi, the chief executive of Uber, declined to attend this week’s annual investment forum in Riyadh. Even the CEO of Softbank, which has received billions of dollars from Saudi Arabia to back technology companies, reportedly has canceled his planned speech at the event.

But the Saudi controversy is focusing more scrutiny on the ethics of taking money from an investor who is accused of wrongdoing or whose track record is questionable.

Fueling the tech race

In the tech startup world, Saudi investment has played a key role in allowing firms to delay going public for years while they pursue a high-growth strategy without worrying about profitability. Those ties have only grown with the ascendancy of Crown Prince Mohammed bin Salman, the son of the Saudi king.

The kingdom’s Public Investment Fund has put $3.5 billion into Uber and has a seat on Uber’s 12-member board. Saudi Arabia also has invested more than $1 billion into Lucid Motors, a California electric car startup, and $400 million in Magic Leap, an augmented reality startup based in Florida.

Almost half of Japan-based Softbank’s $93 billion Vision Fund came from the Saudi government. The Vision Fund has invested in a Who’s Who of tech startups, including WeWork, Wag, DoorDash and Slack.

Now there are reports that as the cloud hangs over the crown prince, Softbank’s plan for a second Vision fund may be on hold. And Saudi money might have trouble finding a home in the future in Silicon Valley, where companies are competing for talented workers, as well as customers.

The tech industry is not alone in questioning its relationship with the Saudi government in the wake of Khashoggi’s death or appearing to rethink its Saudi investments. Museums, universities and other business sectors that have benefited financially from their connections to the Saudis also are taking a harder look at those relationships.

Who are my investors?

Saudi money plays a large role in Silicon Valley, touching everything from ride-hailing firms to business-messaging startups, but it is not the only foreign investment in the region.

More than 20 Silicon Valley venture companies have ties to Chinese government funding, according to Reuters, with the cash fueling tech startups. The Beijing-backed funds have raised concerns that strategically important technology, such as artificial intelligence, is being transferred to China.

And Kremlin money has backed a prominent Russian venture capitalist in the Valley who has invested in Twitter and Facebook.

The Saudi controversy has prompted some in the Valley to question their investors about where those investors are getting their funding. Fred Wilson, a prominent tech venture capitalist, received just such an inquiry.

“I expect to get more emails like this in the coming weeks as the start-up and venture community comes to grip with the flood of money from bad actors that has found its way into the start-up/tech sector over the last decade,” he wrote in a blog post titled “Who Are My Investors?”

“‘Bad actors’ doesn’t simply mean money from rulers in the Gulf who turn out to be cold-blooded killers,” Wilson wrote. “It also means money from regions where dictators rule viciously and restrict freedom.”

This may be a defining ethical moment in Silicon Valley, as it moves away from its libertarian roots to seeing the world in its complexity, said Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics at Santa Clara University.

“Corporate leaders are moving more quickly and decisively than the administration, and they realize they have a couple of hats here — one, they are the chief strategist of their organization, and they also play the role of the responsible person who creates space for the right conversations to happen,” she said.

Tech’s evolving ethics

Responding to demands from their employees and customers, Silicon Valley firms are looking more seriously at business ethics and taking moral stands.

In the case of Google, it meant discontinuing a U.S. Defense Department contract involving artificial intelligence. In the case of WeWork, it meant forbidding employees, on environmental grounds, from eating meat at the office or expensing it.

The Vision Fund will “undoubtedly find itself in a more challenging environment in convincing startups to take its money,” Amir Anvarzadeh, a senior strategist at Asymmetric Advisors in Singapore, recently told Bloomberg. 

An Avatar Is Going to Help Police Guard European Borders

A new artificial intelligence program could make land borders across Europe more secure. When a pilot program begins next month, an avatar – called i-Border-Control – will help police guard several border crossings within the 26-nation European Schengen Area. The technology was introduced this weekend (October 20) at a science festival hosted by Manchester Metropolitan University. VOA’s Mariama Diallo reports.

US Regulator Orders Halt to Self-Driving School Bus Test in Florida

The National Highway Traffic Safety Administration on Monday said it had ordered Transdev North America to immediately stop transporting schoolchildren in Florida in a driverless shuttle as the testing could be putting them at “inappropriate” risk.

The auto safety agency known as NHTSA said in an order issued late Friday that Transdev’s use of its EZ10 Generation II driverless shuttle in the Babcock Ranch community in southwest Florida was “unlawful and in violation of the company’s temporary importation authorization.”

“Innovation must not come at the risk of public safety,” said Deputy NHTSA Administrator Heidi King in a statement.

“Using a non-compliant test vehicle to transport children is irresponsible, inappropriate, and in direct violation of the terms of Transdev’s approved test project.”

In March, NHTSA granted Transdev permission to temporarily import the driverless shuttle for testing and demonstration purposes, but not as a school bus.

The agency said the company had agreed to halt the tests. A spokeswoman for Transdev did not respond to several requests for comment Monday.

Transdev North America is a unit of Transdev, which is controlled by France’s state-owned investment fund Caisse des Depots et Consignations.

The company in August issued a news release saying it would “operate school shuttle service starting this fall with an autonomous vehicle, the first in the world.”

Transdev said the 12-person shuttle bus would operate from a designated pickup area with a safety attendant on board and travel at a top speed of 8 miles per hour (13 kph), with the potential to reach 30 mph (48 kph) once additional infrastructure was completed.

There are numerous low-speed self-driving shuttles being tested in cities around the United States with many others planned.

NHTSA previously said it was moving ahead with plans to revise safety rules that bar fully self-driving cars from the roads without equipment such as steering wheels, pedals and mirrors as the agency works to advance driverless vehicles. The agency has said it opposes proposals to require ‘pre-approving’ self-driving technologies before they are tested.

NHTSA told Transdev that failure to take appropriate action could result in fines, the voiding of the temporary importation authorization or the exportation of the vehicle.

Earlier this month, French utility Veolia agreed to sell its 30 percent stake in Transdev to Germany’s Rethmann Group.

Individual Cooling Units Could Save Lives

The World Health Organization is closely watching the Ebola outbreak in Congo where the number of cases has risen to 185 since the outbreak started in August. One of the challenges for health workers fighting highly infectious diseases like Ebola is spending time in HazMat suits. They can be unwieldy and incredibly hot, but new technology could solve one of those problems. VOA’s Kevin Enochs reports.

Using Tech to Save World’s Most Endangered Species in Tanzania

In Tanzania, protecting endangered animals has become easier thanks to Earth Ranger. Earth Ranger is not a superhero; it’s a technology platform developed by Vulcan Inc., a company co-founded by U.S. philanthropist and Microsoft co-founder Paul Allen. The system helps rangers remotely monitor elephants and other animals to stay ahead of poachers. Faiza Elmasry has the story. VOA’s Faith Lapidus narrates.

Financial Watchdog: Regulate Cryptocurrencies Now, Or Else

A global financial body says governments worldwide must establish rules for virtual currencies like bitcoin to stop criminals from using them to launder money or finance terrorism.

The Financial Action Task Force said Friday that from next year it will start assessing whether countries are doing enough to fight criminal use of virtual currencies.

Countries that fail to do so risk being effectively put on a “gray list” by the FATF, which can scare away investors.

Marshall Billingslea, an assistant U.S. Treasury secretary who holds the FATF’s rotating leadership, said, “We’ve made clear today that every jurisdiction must establish” virtual currency rules. “It’s no longer optional.”

The FATF described how the Islamic State group and al-Qaida have used virtual currencies.

Financial regulators worldwide have struggled to deal with the rise of electronic alternatives to traditional money.

Former Deputy UK Leader Nick Clegg Takes Post with Facebook

Facebook has hired former U.K. deputy prime minister Nick Clegg to head its global policy and communications teams, enlisting a veteran of European Union politics to help it with increased regulatory scrutiny in the region.

Clegg, 51, will become a vice president of the social media giant and report to Chief Operating Officer Sheryl Sandberg.

Clegg will be called upon to help Facebook and other Silicon Valley stalwarts grapple with a changing regulatory landscape globally. European Union regulators are keen to rein in the mostly American tech giants they blame for avoiding tax, stifling competition and encroaching on privacy rights.

Clegg led the Liberal Democrats from 2007 to 2015, including five years in the coalition government with the Conservatives. He lost his Sheffield Hallam seat at last year’s general election.

French Startup Offers Visions of Damaged Middle Eastern Cities

The Syrian government says the ancient city of Palmyra, gravely damaged by IS militants, could reopen to the public next spring. But, while restoration continues on the ground, one French startup is showing people how Palmyra and other cities affected by war once looked, how they look now, and how they might look after restoration. Kevin Enochs explains.

Twitter Releases Tweets Showing Russian, Iranian Attempts to Influence US Politics

On Wednesday, Twitter released a collection of more than 10 million tweets related to thousands of accounts affiliated with Russia’s Internet Research Agency propaganda organization, as well as hundreds more troll accounts, including many based in Iran.

The data, analyzed in a report by The Atlantic Council’s Digital Forensic Research Lab, cover 3,841 accounts affiliated with the Russia-based Internet Research Agency and 770 other accounts potentially based in Iran, along with 10 million tweets and more than 2 million images, videos and other media.

Russian trolls targeting U.S. politics took on personas from both the left and the right. Their primary goal appears to have been to sow discord, rather than promote any particular side, presumably with a goal of weakening the United States, the report said.

DFRLab says the Russian trolls were often effective, drawing tens of thousands of retweets on certain posts, including from celebrity commentators like conservative Ann Coulter.

Among the tweets posted:

“Judgement Day is here. Please vote #TrumpPence16 to save our great nation from destruction! #draintheswamp #TrumpForPresident,” said a fake Election Day tweet in 2016.

“Daily reminder: Trump still hasn’t imposed sanctions on Russia that were passed 419-3 in the House and 98-2 in the Senate. Shouldn’t that be grounds for impeachment?” said another tweet in March of this year.

Multiple goals

The Russian operation had multiple goals, including interfering in the U.S. presidential election, polarizing online communities, and weakening trust in American institutions, according to the DFRLab.

“The thing to understand is that the Russians were equal opportunity partisans,” Graham Brookie, one of the researchers behind the analysis, told VOA News. “There was a very specific focus on specific ideological communities and specific demographics.”

Following an initial push to prevent Hillary Clinton from being elected in 2016, the analysis identified a “second wave” of fake accounts, many of which were focused on infiltrating anti-Trump groups, especially those identified with the “Resistance” movement, exploiting sensitive issues such as race relations and gun violence. These often achieved greater impact than their conservative counterparts.

“Don’t ever tell me kneeling for the flag is disrespectful to our troops when Trump calls a sitting Senator ‘Pocahontas’ in front of Native American war heroes,” tweeted an account posing as an African-American woman named “Luisa Haynes” under the handle @wokeluisa in November 2017. The tweet garnered more than 32,000 retweets and over 89,000 likes.

“They tried to inflame everybody, regardless of race, creed, politics or sexual orientation,” the Lab noted in its analysis. “On many occasions, they pushed both sides of divisive issues.”

Iran trolling

Iran’s trolling was primarily focused on promoting its own interests, including attacking regional rivals like Israel and Saudi Arabia.

However, Iran’s trolling was less effective than the Russian posts, with most tweets getting limited responses.

This was partially because of posting styles that were less inflammatory, according to the report.

“Few of the accounts showed distinctive personalities: They largely shared online articles,” according to the report. “As such, they were a poor fit for Twitter, where personal comment tends to resonate more strongly than website shares.” Generally, many troll posts were ineffective, and “their operations were washed away in the firehose of Twitter.”

All of the accounts linked to the massive trove of tweets released by Twitter have been suspended or deleted, and the analysis notes that overall activity from suspected Russian trolls fell this year after Twitter clampdowns in September and June 2017.

But that does not mean political trolls no longer pose a threat.

“Identifying future foreign influence operations, and reducing their impact, will demand awareness and resilience from the activist communities targeted, not just the platforms and the open source community,” according to the report.

Twitter Releases Tweets Showing Foreign Attempts to Influence US Politics

Twitter has released a collection of more than 10 million tweets it says are related to foreign efforts to influence U.S. elections going back a decade, including many tied to Russia’s digital efforts to sow chaos and sway the 2016 election in favor of Donald Trump.

Twitter says it made the cache, which includes tweets from Iran and Russia’s state-sponsored troll farm, Internet Research Agency, available so researchers around the world could conduct their own analyses.

The non-partisan Atlantic Council’s Digital Forensic Research Lab has been looking through the collection since last week.  In a preliminary analysis posted on Medium, the online publishing platform, the Lab noted operators from Iran and Russia appeared to have targeted politically polarized groups in order to maximize divisiveness in the United States’ political scene.  

“The Russian trolls were non-partisan: they tried to inflame everybody, regardless of race, creed, politics, or sexual orientation,” the Lab noted. “On many occasions, they pushed both sides of divisive issues.”

Sifting through the collection is no small task.  The entire set, available for public download on Twitter’s news blog, encompasses spreadsheets and archived tweets from 3,841 Russian-linked accounts and 770 Iran-linked accounts.  The downloads add up to more than 450 gigabytes of data.

The micro-blogging company said in its post, “They include more than 10 million tweets and more than two million images, GIFs, videos, and Periscope broadcasts, including the earliest Twitter activity from accounts connected with these campaigns, dating back to 2009….”
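For researchers who download the archive, a first pass over one of the spreadsheets might look something like the sketch below. The file name and column names are assumptions for illustration; the actual files in Twitter’s release should be checked for their exact schema.

```python
import pandas as pd  # assumes the relevant CSV spreadsheet has already been downloaded

# Hypothetical file and column names; check the downloaded files for the exact schema.
tweets = pd.read_csv(
    "ira_tweets.csv",
    usecols=["userid", "tweet_text", "tweet_time", "retweet_count"],
)

print("Accounts represented:", tweets["userid"].nunique())
print("Tweets in this file:", len(tweets))

# The most-retweeted posts are a natural starting point for an influence analysis.
most_amplified = tweets.sort_values("retweet_count", ascending=False).head(10)
print(most_amplified[["tweet_time", "retweet_count", "tweet_text"]])
```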

Twitter has taken increasing steps to generate public goodwill over its perceived connection to Russian attempts to sway the 2016 election and its role in the spread of fake news.  In January, the company notified about 1.4 million users that they had interacted with Russia-linked accounts during the election or had followed those accounts at the time they were suspended.