Teen’s Program Could Improve Pancreatic Cancer Treatment

Pancreatic cancer treatment could become more advanced with help from 13-year-old Rishab Jain. He’s created a tool for doctors to locate the hard-to-find pancreas more quickly and precisely during cancer treatment. The teen recently won a prestigious young scientist award for his potentially game-changing idea. VOA’s Julie Taboh has more.

Plant Fibers Make Stronger Concrete

It may surprise you that cement is responsible for 7 percent of the world’s carbon emissions. That’s because producing the powdery base of cement that eventually becomes concrete takes a great deal of heat. But it turns out that simple fibers from carrots could not only reduce that carbon footprint but also make concrete stronger. VOA’s Kevin Enochs reports.

Q&A: Facebook Describes How It Detects ‘Inauthentic Behavior’

Facebook announced Friday that it had removed 82 Iranian-linked accounts on Facebook and Instagram. A Facebook spokesperson answered VOA’s questions about its process and efforts to detect what it calls “coordinated inauthentic behavior” by accounts pretending to be U.S. and U.K. citizens and aimed at U.S. and U.K. audiences.

Q: Facebook’s post says there were 7 “events hosted.” Any details about where, when, who?

A: Of seven events, the first was scheduled for February 2016, and the most recent was scheduled for June 2018. One hundred and ten people expressed interest in at least one of these events, and two events received no interest. We cannot confirm whether any of these events actually occurred. Some appear to have been planned to occur only online. The themes are similar to the rest of the activity we have described.

Q: Is there any indication this was an Iranian government-linked program?

A: We recently discussed the challenges involved with determining who is behind information operations. In this case, we have not been able to determine any links to the Iranian government, but we are continuing to investigate. Also, Atlantic Council’s Digital Forensic Research Lab has shared their take on the content in this case here.

Q: How long was the time between discovering this and taking down the pages?

A: We first detected this activity one week ago. As soon as we detected this activity, the teams in our elections war room worked quickly to investigate and remove these bad actors. Given the elections, we took action as soon as we’d completed our initial investigation and shared the information with U.S. and U.K. government officials, U.S. law enforcement, Congress, other technology companies and the Atlantic Council’s Digital Forensic Research Lab.

Q: How have you improved the reporting processes in the past year to speed the ability to remove such content?

A: Just to clarify, today’s takedown was a result of our teams proactively discovering suspicious signals on a page that appeared to be run by Iranian users. From there, we investigated and found the set of pages, groups and accounts that we removed today.

To your broader question on how we’ve improved over the past two years: To ensure that we stay ahead, we’ve invested heavily in better technology and more people. There are now over 20,000 people working on safety and security at Facebook, and thanks to improvements in artificial intelligence we detect many fake accounts, the root cause of so many issues, before they are even created. We’re also working more closely with governments, law enforcement, security experts and other companies because no one organization can do this on its own.

Q: How many people do you have monitoring content in English now? In Persian?

A: We have over 7,500 content reviewers globally. We don’t provide breakdowns of the number of people working in specific languages or regions because that alone doesn’t reflect the number of people working to review content for a particular country or region at any particular time.

Q: How are you training people to spot this content? What’s the process?

A: To be clear, today’s takedown was the result of an internal investigation involving a combination of manual work by our teams of skilled investigators and data science teams using automated tools to look for larger patterns to identify potentially inauthentic behavior. In this case, we relied on both of these techniques working together.

On your separate question about training content reviewers, here is more on our content reviewers and how we support them.
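Facebook has not published its detection tooling, so as a rough illustration of what “automated tools to look for larger patterns” can mean in practice, the toy sketch below flags clusters of accounts that post identical text within a short time window, one commonly cited signal of coordinated behavior. The function name, thresholds, and account IDs are all hypothetical.

```python
from collections import defaultdict

def flag_coordinated(posts, window_seconds=60, min_accounts=3):
    """Toy heuristic: group posts by identical text, then flag any text
    that several distinct accounts published within a short time window.
    `posts` is a list of (account_id, timestamp, text) tuples."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))

    flagged = set()
    for text, items in by_text.items():
        items.sort()
        # For each post, look forward: if min_accounts distinct accounts
        # published the same text inside the window, flag them all.
        for base_ts, _ in items:
            inside = [a for ts, a in items if 0 <= ts - base_ts <= window_seconds]
            if len(set(inside)) >= min_accounts:
                flagged |= set(inside)
    return flagged

posts = [
    ("acct1", 0, "Vote now!"), ("acct2", 10, "Vote now!"),
    ("acct3", 20, "Vote now!"), ("acct4", 5000, "Vote now!"),
    ("acct5", 15, "My cat photo"),
]
print(sorted(flag_coordinated(posts)))  # → ['acct1', 'acct2', 'acct3']
```

Real systems would weigh many more signals (shared infrastructure, account-creation patterns, link targets), which is why the answer stresses that automated pattern-matching works alongside manual investigation.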

Q: Does Facebook have any more information on how effective this messaging is at influencing behavior?

A: We aren’t in a position to know.

Study: Online Attacks on Jews Ramp Up Before Election Day

Far-right extremists have ramped up an intimidating wave of anti-Semitic harassment against Jewish journalists, political candidates and others ahead of next month’s U.S. midterm elections, according to a report released Friday by a Jewish civil rights group.

The Anti-Defamation League’s report says its researchers analyzed more than 7.5 million Twitter messages from Aug. 31 to Sept. 17 and found nearly 30 percent of the accounts repeatedly tweeting derogatory terms about Jews appeared to be automated “bots.”

But accounts controlled by real-life humans often mount the most “worrisome and harmful” anti-Semitic attacks, sometimes orchestrated by leaders of neo-Nazi or white nationalist groups, the researchers said.

“Both anonymity and automation have been used in online propaganda offensives against the Jewish community during the 2018 midterms,” they wrote.

Billionaire philanthropist George Soros was a leading subject of harassing tweets. Soros, a Hungarian-born Jew demonized by right-wing conspiracy theorists, is one of the prominent Democrats who had pipe bombs sent to them this week.

The ADL’s study concludes that online disinformation and abuse are disproportionately targeting Jews in the U.S. “during this crucial political moment.”

“Prior to the election of President Donald Trump, anti-Semitic harassment and attacks were rare and unexpected, even for Jewish Americans who were prominently situated in the public eye. Following his election, anti-Semitism has become normalized and harassment is a daily occurrence,” the report says.

The New York City-based ADL has commissioned other studies of online hate, including a report in May that estimated about 3 million Twitter users posted or re-posted at least 4.2 million anti-Semitic tweets in English over a 12-month period ending Jan. 28. An earlier report from the group said anti-Semitic incidents in the U.S. in the previous year had reached the highest tally it has counted in more than two decades.

For the latest report, researchers interviewed five Jewish people, including two recent political candidates, who had faced “human-based attacks” against them on social media this year. Their experiences demonstrated that anti-Semitic harassment “has a chilling effect on Jewish Americans’ involvement in the public sphere,” their report says.

“While each interview subject spoke of not wanting to let threats of the trolls affect their online activity, political campaigns, academic research or news reporting, they all admitted the threats of violence and deluges of anti-Semitism had become part of their internal equations,” researchers wrote.

The most popular term used in tweets containing #TrumpTrain was “Soros.” The study also found a “surprising” abundance of tweets referencing “QAnon,” a right-wing conspiracy theory that started on an online message board and has been spread by Trump supporters.

“There are strong anti-Semitic undertones, as followers decry George Soros and the Rothschild family as puppeteers,” researchers wrote.

Facebook Removes 82 Iranian-Linked Accounts

Facebook announced Friday that it has removed 82 accounts, pages or groups from its site and Instagram that originated in Iran, with some of the account owners posing as residents of the United States or Britain and posting about liberal politics.

At least one of the Facebook pages had more than one million followers, the firm said. The company said it did not know if the coordinated behavior was tied to the Iranian government. Less than $100 in advertising on Facebook and Instagram was spent to amplify the posts, the firm said.

The company said in a post titled “Taking Down Coordinated Inauthentic Behavior from Iran” that some of the accounts and pages were tied to ones taken down in August.

“Today we removed multiple pages, groups and accounts that originated in Iran for engaging in coordinated inauthentic behavior on Facebook and Instagram,” the firm said. “This is when people or organizations create networks of accounts to mislead others about who they are, or what they’re doing.”

Monitoring online activity

Facebook says it has ramped up its monitoring of the authenticity of accounts in the runup to the U.S. midterm election, with more than 20,000 people working on safety and security. The social media firm says it has created an election “war room” on the campus to monitor behavior it deems “inauthentic.”

Nathaniel Gleicher, head of cybersecurity policy for Facebook, said that the behavior was coordinated and originated in Iran.

The posts appeared as if they were being made by citizens in the United States and, in a few cases, in Britain. They dealt with “politically charged topics such as race relations, opposition to the president, and immigration.”

In terms of the reach of the posts, “about 1.02 million accounts followed at least one of these Pages, about 25,000 accounts joined at least one of these groups, and more than 28,000 accounts followed at least one of these Instagram accounts.”

A more advanced approach

The company released some images related to the accounts. 

An analysis of 10 Facebook pages and 14 Instagram accounts by the Atlantic Council’s Digital Forensic Research Lab concluded the pages and accounts were newer, and more advanced, than another batch of Iranian-linked pages and accounts that were removed in August.

“These assets were designed to engage in, rather than around, the political dialogue,” the lab’s Ben Nimmo and Graham Brookie wrote. “Their behavior showed how much they had adapted from earlier operations, focusing more on social media than third party websites.”

And those behind the accounts appeared to have learned a lesson from Russia’s ongoing influence campaign.

“One main aim of the Iranian group of accounts was to inflame America’s partisan divides,” the analysis said. “The tone of the comments added to the posts suggests that this had some success.”

Targeting U.S. midterm voters

Some of the accounts and pages directly targeted the upcoming U.S. elections, showing individuals talking about how they voted or calling on others to vote.

Most were aimed at a liberal audience.

“Proud to say that my first ever vote was for @BetoORourke,” said one post from an account called “No racism no war,” which had 412,000 likes and about half a million followers.

“Get your ass out and VOTE!!! Do your part,” said another post shared by the same account.

U.S. intelligence and national security officials have repeatedly warned of efforts by countries like Iran and China, in addition to Russia, to influence and interfere with U.S. elections next month and in 2020.

Democratic Representative Adam Schiff, the ranking member of the House Intelligence Committee, said Facebook’s decision to pull down the questionable pages and accounts and share the information with the public is critical to “keeping users aware of and inoculated against such foreign influence campaigns.”

“Facebook’s discovery and exposure of additional nefarious Iranian activity on its platforms so close to the midterms is an important reminder that both the public and private sector have a shared responsibility to remain vigilant as foreign entities continue their attempts to influence our political dialogue online,” Schiff said in a statement.

But not all the Iranian material was focused on the U.S. midterm election.

“These accounts masqueraded primarily as American liberals, posting only small amounts of anti-Saudi and anti-Israeli content,” the Digital Forensic Research Lab said.

A number of posts also took aim at U.S. policy in the Middle East in general. One post by @sut_racism accused Ivanka Trump of having “the blood of Dead Children on Her Hands.”

Still, the analysts said many of the posts also contained errors that gave away their non-U.S. origins. For example, in one post talking about the deaths of U.S. soldiers in World War II, the account’s authors used a photo of Soviet soldiers.

Michelle Quinn contributed to this report.

UK Fines Facebook Over Data Privacy Scandal, EU Seeks Audit

British regulators slapped Facebook on Thursday with a fine of 500,000 pounds ($644,000) — the maximum possible — for failing to protect the privacy of its users in the Cambridge Analytica scandal.

At the same time, European Union lawmakers demanded an audit of Facebook to better understand how it handles information, reinforcing how regulators in the region are taking a tougher stance on data privacy compared with U.S. authorities.

Britain’s Information Commissioner’s Office (ICO) found that between 2007 and 2014, Facebook processed the personal information of users unfairly by giving app developers access to their information without informed consent. The failings meant the data of some 87 million people was used without their knowledge.

“Facebook failed to sufficiently protect the privacy of its users before, during and after the unlawful processing of this data,” said Elizabeth Denham, the information commissioner. “A company of its size and expertise should have known better and it should have done better.”

The ICO said a subset of the data was later shared with other organizations, including SCL Group, the parent company of political consultancy Cambridge Analytica, which counted U.S. President Donald Trump’s 2016 election campaign among its clients. News that the consultancy had used data from tens of millions of Facebook accounts to profile voters ignited a global scandal on data rights.

The fine amounts to a speck on Facebook’s finances. In the second quarter, the company generated revenue at a rate of nearly $100,000 per minute. That means it will take less than seven minutes for Facebook to bring in enough money to pay for the fine.

But it’s the maximum penalty allowed under the law at the time the breach occurred. Had the scandal taken place after new EU data protection rules went into effect this year, the amount would have been far higher — including maximum fines of 17 million pounds or 4 percent of global revenue, whichever is higher. Under that standard, Facebook would have been required to pay at least $1.6 billion, which is 4 percent of its revenue last year.
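Both comparisons can be checked with back-of-the-envelope arithmetic. The revenue inputs below (roughly $13.2 billion for the second quarter of 2018 and $40.65 billion for 2017) are assumptions drawn from Facebook’s public earnings reports, not figures stated in this article.

```python
FINE_USD = 644_000        # 500,000 GBP at the exchange rate cited above
Q2_REVENUE = 13.2e9       # assumed: Facebook's reported Q2 2018 revenue
REVENUE_2017 = 40.65e9    # assumed: Facebook's reported 2017 revenue

minutes_in_q2 = 91 * 24 * 60            # days in the quarter, in minutes
per_minute = Q2_REVENUE / minutes_in_q2
print(f"revenue per minute: ${per_minute:,.0f}")              # roughly $100,000
print(f"minutes to cover fine: {FINE_USD / per_minute:.1f}")  # under 7 minutes

gdpr_cap = 0.04 * REVENUE_2017          # GDPR's 4%-of-global-revenue ceiling
print(f"4% of 2017 revenue: ${gdpr_cap / 1e9:.2f} billion")   # about $1.6 billion
```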

The data rules are tougher than the ones in the United States, and a debate is ongoing on how the U.S. should respond. California is moving to put in regulations similar to the EU’s strict rules by 2020 and other states are mulling more aggressive laws. That’s rattled the big tech companies, which are pushing for a federal law that would treat them more leniently.

Facebook CEO Mark Zuckerberg said in a video message to a big data privacy conference in Brussels this week that “we have a lot more work to do” to safeguard personal data.

About the U.K. fine, Facebook responded in a statement that it is reviewing the decision.

“While we respectfully disagree with some of their findings, we have said before that we should have done more to investigate claims about Cambridge Analytica and taken action in 2015. We are grateful that the ICO has acknowledged our full cooperation throughout their investigation.”

Facebook also took solace in the fact that the ICO did not definitively assert that U.K. users had their data shared for campaigning. But the commissioner noted in her statement that “even if Facebook’s assertion is correct,” U.S. residents would have used the site while visiting the U.K.

EU lawmakers had summoned Zuckerberg in May to testify about the Cambridge Analytica scandal.

In their vote on Thursday, they said Facebook should agree to a full audit by Europe’s cyber security agency and data protection authority “to assess data protection and security of users’ personal data.”

The EU lawmakers also called for new electoral safeguards online, a ban on profiling for electoral purposes and moves to make it easier to recognize paid political advertisements and their financial backers.

 

UK Fines Facebook Over Data Privacy Scandal, EU Seeks Audit

British regulators slapped Facebook on Thursday with a fine of 500,000 pounds ($644,000) — the maximum possible — for failing to protect the privacy of its users in the Cambridge Analytica scandal.

At the same time, European Union lawmakers demanded an audit of Facebook to better understand how it handles information, reinforcing how regulators in the region are taking a tougher stance on data privacy compared with U.S. authorities.

Britain’s Information Commissioner Office found that between 2007 and 2014, Facebook processed the personal information of users unfairly by giving app developers access to their information without informed consent. The failings meant the data of some 87 million people was used without their knowledge.

“Facebook failed to sufficiently protect the privacy of its users before, during and after the unlawful processing of this data,” said Elizabeth Denham, the information commissioner. “A company of its size and expertise should have known better and it should have done better.”

The ICO said a subset of the data was later shared with other organizations, including SCL Group, the parent company of political consultancy Cambridge Analytica, which counted U.S. President Donald Trump’s 2016 election campaign among its clients. News that the consultancy had used data from tens of millions of Facebook accounts to profile voters ignited a global scandal on data rights.

The fine amounts to a speck on Facebook’s finances. In the second quarter, the company generated revenue at a rate of nearly $100,000 per minute. That means it will take less than seven minutes for Facebook to bring in enough money to pay for the fine.

But it’s the maximum penalty allowed under the law at the time the breach occurred. Had the scandal taken place after new EU data protection rules went into effect this year, the amount would have been far higher — including maximum fines of 17 million pounds or 4 percent of global revenue, whichever is higher. Under that standard, Facebook would have been required to pay at least $1.6 billion, which is 4 percent of its revenue last year.

The data rules are tougher than the ones in the United States, and a debate is ongoing on how the U.S. should respond. California is moving to put in regulations similar to the EU’s strict rules by 2020 and other states are mulling more aggressive laws. That’s rattled the big tech companies, which are pushing for a federal law that would treat them more leniently.

Facebook CEO Mark Zuckerberg said in a video message to a big data privacy conference in Brussels this week that “we have a lot more work to do” to safeguard personal data.

About the U.K. fine, Facebook responded in a statement that it is reviewing the decision.

“While we respectfully disagree with some of their findings, we have said before that we should have done more to investigate claims about Cambridge Analytica and taken action in 2015. We are grateful that the ICO has acknowledged our full cooperation throughout their investigation.”

Facebook also took solace in the fact that the ICO did not definitively assert that U.K. users had their data shared for campaigning. But the commissioner noted in her statement that “even if Facebook’s assertion is correct,” U.S. residents would have used the site while visiting the U.K.

EU lawmakers had summoned Zuckerberg in May to testify about the Cambridge Analytica scandal.

In their vote on Thursday, they said Facebook should agree to a full audit by Europe’s cyber security agency and data protection authority “to assess data protection and security of users’ personal data.”

The EU lawmakers also called for new electoral safeguards online, a ban on profiling for electoral purposes and moves to make it easier to recognize paid political advertisements and their financial backers.

Google Abandons Berlin Campus Plan After Locals Protest

Google is abandoning plans to establish a campus for tech startups in Berlin after protests from residents worried about gentrification.

The internet giant confirmed reports Thursday it will sublet the former electrical substation in the capital’s Kreuzberg district to two charitable organizations, Betterplace.org and Karuna.

Google has more than a dozen so-called campuses around the world. They are intended as hubs to bring together potential employees, startups and investors.

Protesters had recently picketed the Umspannwerk site with placards such as “Google go home.”

Karuna, which helps disadvantaged children, said Google will pay 14 million euros ($16 million) toward renovation and maintenance for the coming five years.

Google said it will continue to work with startups in Berlin, which has become a magnet for tech companies in Germany in recent years.

Hi-Tech Cameras Spy Fugitive Emissions

The technology used in space missions can be expensive but it has some practical benefits here on Earth. Case in point: the thousands of high resolution images taken from the surface of Mars, collected by the two Mars rovers – Spirit and Opportunity. Now researchers at Carnegie Mellon University, in Pittsburgh, are using the same technology to analyze air pollution here on our planet. VOA’s George Putic reports.

Google Abandons Planned Berlin Office Hub

Campaigners in a bohemian district of Berlin celebrated Wednesday after internet giant Google abandoned strongly opposed plans to open a large campus there.

The US firm had planned to set up an incubator for start-up companies in Kreuzberg, one of the older districts in the west of the capital.

But the company’s German spokesman Ralf Bremer announced Wednesday that the 3,000-square-metre (3,590-square-yard) space, planned to host offices, cafes and communal work areas, would instead go to two local humanitarian associations.

Bremer did not say whether local resistance to the plans over the past two years had played a part in the change of heart, although he had told the Berliner Zeitung daily that Google does not allow protests to dictate its actions.

“The struggle pays off,” tweeted “GloReiche Nachbarschaft”, one of the groups opposed to the Kreuzberg campus plan and part of the “F**k off Google” campaign.

Some campaigners objected to what they described as Google’s “evil” corporate practices, such as tax evasion and the unethical use of personal data.

Others opposed the gentrification of the district, which they said was pricing too many people out of the area.

A recent study carried out by the Knight Frank consultancy concluded that property prices are rising faster in Berlin than anywhere else in the world: they jumped 20.5 percent between 2016 and 2017.

In Kreuzberg over the same period, the rise was an astonishing 71 percent.

Kreuzberg, which straddled the Berlin Wall that divided East and West Berlin during the Cold War, has traditionally been a bastion of the city’s underground and radical culture.

Facebook Unveils Systems for Catching Child Nudity, ‘Grooming’ of Children

Facebook Inc said on Wednesday that company moderators during the last quarter removed 8.7 million user images of child nudity with the help of previously undisclosed software that automatically flags such photos.

The machine learning tool rolled out over the last year identifies images that contain both nudity and a child, allowing increased enforcement of Facebook’s ban on photos that show minors in a sexualized context.
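Facebook has not published the system’s internals; as a rough illustration of the two-signal gate this paragraph describes, here is a Python sketch in which the classifier scores and thresholds are entirely made up:

```python
# Sketch of the flagging logic described above: an image is escalated for human
# review only when two independent classifier scores -- nudity and presence of
# a minor -- are both above threshold. The scores here are stand-in values;
# a real system would obtain them from trained vision models.
from typing import NamedTuple


class ImageScores(NamedTuple):
    p_nudity: float  # model's probability the image contains nudity
    p_minor: float   # model's probability the image contains a child


def should_flag(scores: ImageScores,
                nudity_threshold: float = 0.8,
                minor_threshold: float = 0.8) -> bool:
    """Queue the image for human review only if BOTH signals fire."""
    return (scores.p_nudity >= nudity_threshold
            and scores.p_minor >= minor_threshold)


print(should_flag(ImageScores(p_nudity=0.95, p_minor=0.91)))  # both high -> flag
print(should_flag(ImageScores(p_nudity=0.95, p_minor=0.10)))  # adult-only -> no flag
```

Requiring both signals, rather than either one alone, is what distinguishes this filter from the general adult-nudity filters Facebook already ran.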

A similar system also disclosed Wednesday catches users engaged in “grooming,” or befriending minors for sexual exploitation.

Facebook’s global head of safety Antigone Davis told Reuters in an interview that the “machine helps us prioritize” and “more efficiently queue” problematic content for the company’s trained team of reviewers.

The company is exploring applying the same technology to its Instagram app.

Under pressure from regulators and lawmakers, Facebook has vowed to speed up removal of extremist and illicit material.

Machine learning programs that sift through the billions of pieces of content users post each day are essential to its plan.

Machine learning is imperfect, and news agencies and advertisers are among those that have complained this year about Facebook’s automated systems wrongly blocking their posts.

Davis said the child safety systems would make mistakes but users could appeal.

“We’d rather err on the side of caution with children,” she said.

Facebook’s rules have for years banned even family photos of lightly clothed children uploaded with “good intentions,” out of concern about how others might abuse such images.

Before the new software, Facebook relied on users or its adult nudity filters to catch child images. A separate system blocks child pornography that has previously been reported to authorities.

Facebook has not previously disclosed data on child nudity removals, though some would have been counted among the 21 million posts and comments it removed in the first quarter for sexual activity and adult nudity.

Facebook said the program, which was trained on its collection of nude adult photos and photos of clothed children, has led to more removals. It makes exceptions for art and history, such as the Pulitzer Prize-winning photo of a naked girl fleeing a Vietnam War napalm attack.

Protecting minors

The child grooming system evaluates factors such as how many people have blocked a particular user and whether that user quickly attempts to contact many children, Davis said.
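The model itself is not public, but the prioritization logic Davis describes can be sketched as a toy risk score; every weight and threshold below is invented for illustration:

```python
# Toy heuristic combining the two signals Davis mentions: how many users have
# blocked an account, and how quickly it tries to contact many minors.
# All caps, weights and thresholds here are made up for illustration.
from dataclasses import dataclass


@dataclass
class AccountActivity:
    blocks_received: int            # users who have blocked this account
    minors_contacted_last_day: int  # distinct minor accounts messaged in 24h


def grooming_risk_score(a: AccountActivity) -> float:
    """Higher score -> higher priority in the human-review queue."""
    score = 0.0
    score += min(a.blocks_received, 50) * 0.1         # cap each signal so
    score += min(a.minors_contacted_last_day, 20) * 0.25  # neither dominates
    return score


def needs_review(a: AccountActivity, threshold: float = 3.0) -> bool:
    return grooming_risk_score(a) >= threshold


suspicious = AccountActivity(blocks_received=12, minors_contacted_last_day=15)
print(needs_review(suspicious))  # high on both signals
```

As the article notes, such a score only prioritizes the queue; the final decision rests with Facebook’s trained reviewers.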

Michelle DeLaune, chief operating officer at the National Center for Missing and Exploited Children (NCMEC), said the organization expects to receive about 16 million child porn tips worldwide this year from Facebook and other tech companies, up from 10 million last year.

With the increase, NCMEC said it is working with Facebook to develop software to decide which tips to assess first.

Still, DeLaune acknowledged that a crucial blind spot is encrypted chat apps and secretive “dark web” sites where much of the new child pornography originates.

Encryption of messages on Facebook-owned WhatsApp, for example, prevents machine learning from analyzing them.

DeLaune said NCMEC would educate tech companies and “hope they use creativity” to address the issue.

Apple CEO Backs Privacy Laws, Warns Data Being ‘Weaponized’

The head of Apple on Wednesday endorsed tough privacy laws for both Europe and the U.S. and renewed the technology giant’s commitment to protecting personal data, which he warned was being “weaponized” against users.

 

Speaking at an international conference on data privacy, Apple CEO Tim Cook applauded European Union authorities for bringing in a strict new data privacy law this year and said the iPhone maker supports a U.S. federal privacy law.

 

Cook’s remarks in Brussels, the European Union’s home base, along with comments due later from the top bosses of Google and Facebook, underscore how the U.S. tech giants are jostling to curry favor in the region as regulators tighten their scrutiny.

 

Data protection has become a major political issue worldwide, and European regulators have led the charge in setting new rules for the big internet companies. The EU’s new General Data Protection Regulation, or GDPR, requires companies to change the way they do business in the region, and a number of headline-grabbing data breaches have raised public awareness of the issue.

 

“In many jurisdictions, regulators are asking tough questions. It is time for the rest of the world, including my home country, to follow your lead,” Cook said.

 

“We at Apple are in full support of a comprehensive federal privacy law in the United States,” he said, to applause from hundreds of privacy officials from more than 70 countries.

 

In the U.S., California is moving to put in regulations similar to the EU’s strict rules by 2020 and other states are mulling more aggressive laws. That’s rattled the big tech companies, which are pushing for a federal law that would treat them more leniently.

 

Cook warned that technology’s promise to drive breakthroughs that benefit humanity is at risk of being overshadowed by the harm it can cause by deepening division and spreading false information. He said the trade in personal information “has exploded into a data industrial complex.”

 

“Our own information, from the everyday to the deeply personal, is being weaponized against us with military efficiency,” he said. Scraps of personal data are collected for digital profiles that let businesses know users better than they know themselves, Cook said, and allow companies to offer users increasingly extreme content that hardens their convictions.

 

“This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them,” he said.

 

Cook’s appearance seems set to one-up his tech rivals and show off his company’s credentials in data privacy, which has become a weak point for both Facebook and Google.

 

“With the spotlight shining as directly as it is, Apple have the opportunity to show that they are the leading player and they are taking up the mantle,” said Ben Robson, a lawyer at Oury Clark specializing in data privacy. Cook’s appearance “is going to have good currency” with officials, he added.

 

Facebook CEO Mark Zuckerberg and Google head Sundar Pichai were scheduled to address the annual meeting of global data privacy chiefs by video. Only Cook attended in person.

 

He has repeatedly said privacy is a “fundamental human right” and vowed his company wouldn’t sell ads based on customer data the way companies like Facebook do.

 

His speech comes a week after the iPhone maker unveiled expanded privacy protection measures for people in the U.S., Canada, Australia and New Zealand, including allowing them to download all personal data held by Apple. European users already had access to this feature after GDPR took effect in May. Apple plans to expand it worldwide.

 

The International Conference of Data Protection and Privacy Commissioners, held in a different city every year, normally attracts little attention but its Brussels venue this year takes on symbolic meaning as EU officials ratchet up their tech regulation efforts.

 

The 28-nation EU took on global leadership of the issue when it beefed up data privacy regulations by launching GDPR. The new rules require companies to justify the collection and use of personal data gleaned from phones, apps and visited websites. They must also give EU users the ability to access and delete data, and to object to data use.

 

GDPR also allows for big fines benchmarked to revenue, which for big tech companies could amount to billions of dollars.

 

In the first big test of the new rules, Ireland’s data protection commission, which is a lead authority for Europe as many big tech firms are based in the country, is investigating Facebook after a data breach let hackers access 3 million EU accounts.

 

Google, meanwhile, shut down its Plus social network this month after revealing it had a flaw that could have exposed personal information of up to half a million people.

US Tech Companies Reconsider Saudi Investment

The controversy over the death of Saudi Arabian journalist Jamal Khashoggi has shined a harsh light on the growing financial ties between Silicon Valley and the world’s largest oil exporter.

As Saudi Arabia’s annual investment forum in Riyadh — dubbed “Davos in the Desert” — continues, representatives from many of the kingdom’s highest-profile overseas tech investments are staying away, joining other international business leaders in shunning the conference amid lingering questions over what role the Saudi government played in the killing of a journalist inside its consulate in Turkey.

Tech leaders such as Steve Case, the co-founder of AOL, and Dara Khosrowshahi, the chief executive of Uber, declined to attend this week’s annual investment forum in Riyadh. Even the CEO of SoftBank, which has received billions of dollars from Saudi Arabia to back technology companies, has reportedly canceled his planned speech at the event.

But the Saudi controversy is focusing more scrutiny on the ethics of taking money from an investor who is accused of wrongdoing or whose track record is questionable.

Fueling the tech race

In the tech startup world, Saudi investment has played a key role in allowing firms to delay going public for years while they pursue a high-growth strategy without worrying about profitability. Those ties have only grown with the ascendancy of Crown Prince Mohammed bin Salman, the son of the Saudi king.

The kingdom’s Public Investment Fund has put $3.5 billion into Uber and has a seat on Uber’s 12-member board. Saudi Arabia also has invested more than $1 billion into Lucid Motors, a California electric car startup, and $400 million in Magic Leap, an augmented reality startup based in Florida.

Almost half of Japanese conglomerate SoftBank’s $93 billion Vision Fund came from the Saudi government. The Vision Fund has invested in a who’s who of tech startups, including WeWork, Wag, DoorDash and Slack.

Now there are reports that, as the cloud hangs over the crown prince, SoftBank’s plan for a second Vision Fund may be on hold. And Saudi money might have trouble finding a home in the future in Silicon Valley, where companies are competing for talented workers, as well as customers.

The tech industry is not alone in questioning its relationship with the Saudi government in the wake of Khashoggi’s death or appearing to rethink its Saudi investments. Museums, universities and other business sectors that have benefited financially from their connections to the Saudis also are taking a harder look at those relationships.

Who are my investors?

Saudi money plays a large role in Silicon Valley, touching everything from ride-hailing firms to business-messaging startups, but it is not the only foreign investment in the region.

More than 20 Silicon Valley venture companies have ties to Chinese government funding, according to Reuters, with the cash fueling tech startups. The Beijing-backed funds have raised concerns that strategically important technology, such as artificial intelligence, is being transferred to China.

And Kremlin money has backed a prominent Russian venture capitalist in the Valley who has invested in Twitter and Facebook.

The Saudi controversy has prompted some in the Valley to question their investors about where those investors are getting their funding. Fred Wilson, a prominent tech venture capitalist, received just such an inquiry.

“I expect to get more emails like this in the coming weeks as the start-up and venture community comes to grips with the flood of money from bad actors that has found its way into the start-up/tech sector over the last decade,” he wrote in a blog post titled “Who Are My Investors?”

“‘Bad actors’ doesn’t simply mean money from rulers in the gulf who turn out to be cold blooded killers,” Wilson wrote. “It also means money from regions where dictators rule viciously and restrict freedom.”

This may be a defining ethical moment for Silicon Valley as it moves away from its libertarian roots and begins to see the world in its complexity, said Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics at Santa Clara University.

“Corporate leaders are moving more quickly and decisively than the administration, and they realize they have a couple of hats here — one, they are the chief strategist of their organization, and they also play the role of the responsible person who creates space for the right conversations to happen,” she said.

Tech’s evolving ethics

Responding to demands from their employees and customers, Silicon Valley firms are looking more seriously at business ethics and taking moral stands.

In the case of Google, it meant discontinuing a U.S. Defense Department contract involving artificial intelligence. In the case of WeWork, it meant forbidding employees from eating meat at the office or buying it on company expenses, on environmental grounds.

The Vision Fund will “undoubtedly find itself in a more challenging environment in convincing startups to take its money,” Amir Anvarzadeh, a senior strategist at Asymmetric Advisors in Singapore, recently told Bloomberg.