Category Archives: Technology

Silicon Valley & technology news. Technology is the application of conceptual knowledge to achieve practical goals, especially in a reproducible way. The word technology can also mean the products resulting from such efforts, including both tangible tools such as utensils or machines, and intangible ones such as software. Technology plays a critical role in science, engineering, and everyday life.

Electronics Flexes Into the Future

Advancements in digital printing are leading to more sophisticated, flexible electronics capable of changing the way we live and the way we use technology. Reporter Deana Mitchell takes a look at the latest technological innovations at a research center in San Jose, California.

China Steps Up VPN Blocks Ahead of Major Trade, Internet Shows

Chinese authorities have stepped up efforts to block virtual private networks (VPNs), service providers said Tuesday, describing a “cat-and-mouse” game with censors ahead of a major trade expo and internet conference.

VPNs allow internet users in China, including foreign companies, to access overseas sites that authorities bar through the so-called Great Firewall, such as Facebook Inc and Alphabet Inc’s Google.

Since Xi Jinping became president in 2013, authorities have sought to curb VPN use, with providers suffering periodic lags in connectivity because of government blocks.

“This time, the Chinese government seemed to have staff on the ground monitoring our response in real time and deploying additional blocks,” said Sunday Yokubaitis, the chief executive of Golden Frog, the maker of the VyprVPN service.

Authorities started blocking some of its services on Sunday, he told Reuters, although VyprVPN’s service has since been restored in China.

“Our countermeasures usually work for a couple of days before the attack profile changes and they block us again,” Yokubaitis said.

The latest attacks were more aggressive than the “steadily increasing blocks” the firm had experienced in the second half of the year, he added.

The Cyberspace Administration of China did not immediately respond to a faxed request from Reuters for comment.

Another provider, ExpressVPN, also acknowledged connectivity issues with its service in China on Monday, which sparked user complaints.

“There has long been a cat-and-mouse game with VPNs in China and censors regularly change their blocking techniques,” its spokesman told Reuters.

Last year, Apple Inc dropped a number of unapproved VPN apps from its app store in China, after Beijing adopted tighter rules.

Although fears of a blanket block on services have not materialized, industry experts say VPN connections often face outages around the time of major events in China.

Xi will attend a huge trade fair in Shanghai next week designed to promote China as a global importer and calm foreign concern about its trade practices, while the eastern town of Wuzhen hosts the annual World Internet Conference to showcase China’s vision for internet governance.

Censors may be testing new technology that blocks VPNs more effectively, said Lokman Tsui, who studies freedom of expression and digital rights at the Chinese University of Hong Kong.

“It could be just a wave of experiments,” he said of the latest service disruptions.

Apple’s New iPads Embrace Facial Recognition

Apple’s new iPads will resemble its latest iPhones as the company ditches a home button and fingerprint sensor to make room for the screen.

As with the iPhone XR and XS models, the new iPad Pro will use facial-recognition technology to unlock the device and authorize app and Apple Pay purchases.

Apple also unveiled new Mac models at an opera house in New York, where the company emphasized artistic uses for its products such as creating music, video and sketches. New Macs include a MacBook Air laptop with a better screen.

Research firm IDC says tablet sales have been declining overall, though Apple saw a 3 percent increase in iPad sales last year to nearly 44 million, commanding a 27 percent market share.

UN Human Rights Expert Urges States to Curb Intolerance Online

Following the shooting deaths of 11 worshippers at a synagogue in the eastern United States, a U.N. human rights expert urged governments on Monday to do more to curb racist and anti-Semitic intolerance, especially online.

“That event should be a catalyst for urgent action against hate crimes, but also a reminder to fight harder against the current climate of intolerance that has made racist, xenophobic and anti-Semitic attitudes and beliefs more acceptable,” U.N. Special Rapporteur Tendayi Achiume said of Saturday’s attack on a synagogue in Pittsburgh, Pennsylvania.

Achiume, whose mandate is the elimination of racism, racial discrimination, xenophobia and related intolerance, noted in her annual report that “Jews remain especially vulnerable to anti-Semitic attacks online.”

She said that Nazi and neo-Nazi groups exploit the internet to spread and incite hate because it is “largely unregulated, decentralized, cheap” and anonymous.

Achiume, a law professor at the University of California, Los Angeles (UCLA) School of Law, said neo-Nazi groups are increasingly relying on the internet and social media platforms to recruit new members.

Facebook, Twitter and YouTube are among their favorites.

On Facebook, for example, hate groups connect with sympathetic supporters and use the platform to recruit new members, organize events and raise money for their activities. YouTube, which has over 1.5 billion viewers each month, is another critical communications tool for propaganda videos and even neo-Nazi music videos. On Twitter, the presence of white nationalist movements has increased by more than 600 percent, according to one 2012 study cited in the special rapporteur’s report.

The special rapporteur noted that while digital technology has become an integral and positive part of most people’s lives, “these developments have also aided the spread of hateful movements.”

She said in the past year, platforms including Facebook, Twitter and YouTube have banned individual users who have contributed to hate movements or threatened violence, but ensuring the removal of racist content online remains difficult.

Some hate groups try to avoid raising red flags by using racially coded messaging, which makes it harder for social media platforms to recognize their hate speech and shut down their presence.

Achiume cited as an example the use of a cartoon character “Pepe the Frog,” which was appropriated by members of neo-Nazi and white supremacist groups and was widely displayed during a white supremacist rally in the southern U.S. city of Charlottesville, Virginia, in 2017.

The special rapporteur welcomed actions in several states to counter intolerance online, but cautioned that such measures must not be used as a pretext for censorship and other abuses. She also urged governments to work with the private sector — specifically technology companies — to fight such prejudices in the digital space.

How Green Is My Forest? There’s an App to Tell You

A web-based application that monitors the impact of successful forest-rights claims can help rural communities manage resources better and improve their livelihoods, according to analysts.

The app was developed by the Indian School of Business (ISB) to track community rights in India, where the 2006 Forest Rights Act aimed to improve the lives of rural people by recognizing their entitlement to inhabit and live off forests.

With a smartphone or tablet, the app can be used to track the status of a community rights claim.

After the claim is approved, community members can use it to collect and analyze data on tree cover, burned areas and other changes in the forest, said Arvind Khare of the Washington, D.C.-based advocacy group Rights and Resources Initiative (RRI).

“Even in areas that have made great progress in awarding rights, it is very hard to track the socio-ecological impact of the rights on the community,” said Khare, a senior director at RRI, which is testing the app in India.

“Recording the data and analyzing it can tell you which resources need better management, so that these are not used haphazardly, but in a manner that benefits them most,” he told the Thomson Reuters Foundation.

For example, community members can record data on forest products they use such as leaves, flowers, wood and sap, making it easier to ensure that they are not over-exploited, he said.

While indigenous and local communities own more than half the world’s land under customary rights, they have secure legal rights to only 10 percent, according to RRI.

Governments maintain legal and administrative authority over more than two-thirds of global forest area, leaving local communities with only limited access.

In India, under the 2006 law, at least 150 million people could have their rights recognized to about 40 million hectares (154,400 sq miles) of forest land.

But rights to only 3 percent of land have been granted, with states largely rejecting community claims, campaigners say.

While the app is being tested in India, Khare said it can also be used in countries including Peru, Mali, Liberia and Indonesia, where RRI supports rural communities in scaling up forest rights claims.

Data can be entered offline on the app, and then uploaded to the server when the device is connected to the internet. Data is stored in the cloud and accessible to anyone, said Ashwini Chhatre, an associate professor at ISB.
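
The offline-entry-and-sync flow Chhatre describes is a standard pattern: queue records locally, then push them to a server once the device regains connectivity. The short sketch below is purely illustrative; the queue file, observation fields and upload URL are hypothetical stand-ins, not the ISB app's actual interface.

    import json
    import os
    import urllib.request

    QUEUE_FILE = "pending_observations.json"             # local store used while offline
    UPLOAD_URL = "https://example.org/api/observations"   # placeholder endpoint, not the real server


    def record_observation(obs: dict) -> None:
        """Append a forest observation (e.g. tree cover, burned area) to the offline queue."""
        queue = []
        if os.path.exists(QUEUE_FILE):
            with open(QUEUE_FILE) as f:
                queue = json.load(f)
        queue.append(obs)
        with open(QUEUE_FILE, "w") as f:
            json.dump(queue, f)


    def sync_when_online() -> int:
        """Upload queued observations once the device has connectivity; return how many were sent."""
        if not os.path.exists(QUEUE_FILE):
            return 0
        with open(QUEUE_FILE) as f:
            queue = json.load(f)
        uploaded = 0
        for obs in queue:
            req = urllib.request.Request(
                UPLOAD_URL,
                data=json.dumps(obs).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            try:
                urllib.request.urlopen(req, timeout=10)
                uploaded += 1
            except OSError:
                break  # still offline or server unreachable; keep the rest queued
        with open(QUEUE_FILE, "w") as f:
            json.dump(queue[uploaded:], f)  # drop only the records that made it to the server
        return uploaded


    if __name__ == "__main__":
        record_observation({"village": "example", "tree_cover_pct": 62, "burned_area_ha": 1.5})
        print(f"uploaded {sync_when_online()} queued observations")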

“All this while local communities have been fighting simply for the right to live in the forest and use its resources. Now, they can use data to truly benefit from it,” he said.

App Taken Down After Pittsburgh Gunman Revealed as User

Gab, a social networking site often accused of being a haven for white supremacists, neo-Nazis and other hate groups, went offline Monday after several web hosting providers dropped it following revelations that Pittsburgh synagogue shooting suspect Robert Bowers used the platform to threaten Jews.

“Gab isn’t going anywhere,” said Andrew Torba, chief executive officer and creator of Gab.com. “We will exercise every possible avenue to keep Gab online and defend free speech and individual liberty for all people.”

Torba founded Gab two years ago as an alternative to mainstream social networking sites like Facebook and Twitter, billing it as a haven for free speech. The site soon began attracting members of the alt-right and adherents of other extremist ideologies who were unwelcome on other platforms.

“What makes the entirely left-leaning Big Social monopoly qualified to tell us what is ‘news’ and what is ‘trending’ and to define what ‘harassment’ means?” Torba wrote in a 2016 email to Buzzfeed News.

The tide swiftly turned against Gab after Bowers entered the Tree of Life synagogue Saturday morning with an assault rifle and several handguns, killing 11 and wounding six.

It came to light that Bowers had made several anti-Semitic posts on the site, including one the morning of the shooting that read “HIAS likes to bring invaders in that kill our people. I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.” HIAS (Hebrew Immigration Aid Society) helps refugees resettle in the United States.

After Bowers’ posts were picked up by national media, PayPal and payment processor Stripe announced that they would be ending their relationships with Gab. Hosting providers followed soon after, and the website was nonfunctional by Monday morning.

In an interview with NPR aired Monday, Torba defended leaving up Bowers’ post from the morning of the shooting.

“Do you see a direct threat in there?” Torba said. “Because I don’t. What would you expect us to do with a post like that? You want us to just censor anybody who says the phrase ‘I’m going in’? Because that’s just absurd.”

Teen’s Program Could Improve Pancreatic Cancer Treatment

Pancreatic cancer treatment could become more advanced with help from 13-year-old Rishab Jain. He’s created a tool for doctors to locate the hard-to-find pancreas more quickly and precisely during cancer treatment. The teen recently won a prestigious young scientist award for his potentially game-changing idea. VOA’s Julie Taboh has more.

Plant Fibers Make Stronger Concrete

It may surprise you that cement is responsible for 7 percent of the world’s carbon emissions. That’s because it takes a lot of heat to produce cement, the powdery base material that eventually becomes concrete. But it turns out that simple fibers from carrots could not only reduce that carbon footprint but also make concrete stronger. VOA’s Kevin Enochs reports.

Q&A: Facebook Describes How It Detects ‘Inauthentic Behavior’

Facebook announced Friday that it had removed 82 Iranian-linked accounts on Facebook and Instagram. A Facebook spokesperson answered VOA’s questions about its process and efforts to detect what it calls “coordinated inauthentic behavior” by accounts pretending to be U.S. and U.K. citizens and aimed at U.S. and U.K. audiences.

Q: Facebook’s post says there were 7 “events hosted.” Any details about where, when, who?

A: Of seven events, the first was scheduled for February 2016, and the most recent was scheduled for June 2018. One hundred and ten people expressed interest in at least one of these events, and two events received no interest. We cannot confirm whether any of these events actually occurred. Some appear to have been planned to occur only online. The themes are similar to the rest of the activity we have described.

Q: Is there any indication this was an Iranian government-linked program?

A: We recently discussed the challenges involved with determining who is behind information operations. In this case, we have not been able to determine any links to the Iranian government, but we are continuing to investigate. Also, Atlantic Council’s Digital Forensic Research Lab has shared their take on the content in this case here.

​Q: How long was the time between discovering this and taking down the pages?

A: We first detected this activity one week ago. As soon as we detected this activity, the teams in our elections war room worked quickly to investigate and remove these bad actors. Given the elections, we took action as soon as we’d completed our initial investigation and shared the information with U.S. and U.K. government officials, U.S. law enforcement, Congress, other technology companies and the Atlantic Council’s Digital Forensic Research Lab.

Q: How have you improved the reporting processes in the past year to speed the ability to remove such content?

A: Just to clarify, today’s takedown was a result of our teams proactively discovering suspicious signals on a page that appeared to be run by Iranian users. From there, we investigated and found the set of pages, groups and accounts that we removed today.

To your broader question on how we’ve improved over the past two years: To ensure that we stay ahead, we’ve invested heavily in better technology and more people. There are now over 20,000 people working on safety and security at Facebook, and thanks to improvements in artificial intelligence we detect many fake accounts, the root cause of so many issues, before they are even created. We’re also working more closely with governments, law enforcement, security experts and other companies because no one organization can do this on its own.

Q: How many people do you have monitoring content in English now? In Persian?

A: We have over 7,500 content reviewers globally. We don’t provide breakdowns of the number of people working in specific languages or regions because that alone doesn’t reflect the number of people working to review content for a particular country or region at any particular time.

Q: How are you training people to spot this content? What’s the process?

A: To be clear, today’s takedown was the result of an internal investigation involving a combination of manual work by our teams of skilled investigators and data science teams using automated tools to look for larger patterns to identify potentially inauthentic behavior. In this case, we relied on both of these techniques working together.

On your separate question about training content reviewers, here is more on our content reviewers and how we support them.

Q: Does Facebook have any more information on how effective this messaging is at influencing behavior?

A: We aren’t in a position to know.

Study: Online Attacks on Jews Ramp Up Before Election Day

Far-right extremists have ramped up an intimidating wave of anti-Semitic harassment against Jewish journalists, political candidates and others ahead of next month’s U.S. midterm elections, according to a report released Friday by a Jewish civil rights group.

The Anti-Defamation League’s report says its researchers analyzed more than 7.5 million Twitter messages from Aug. 31 to Sept. 17 and found nearly 30 percent of the accounts repeatedly tweeting derogatory terms about Jews appeared to be automated “bots.”

But accounts controlled by real-life humans often mount the most “worrisome and harmful” anti-Semitic attacks, sometimes orchestrated by leaders of neo-Nazi or white nationalist groups, the researchers said.

“Both anonymity and automation have been used in online propaganda offensives against the Jewish community during the 2018 midterms,” they wrote.

Billionaire philanthropist George Soros was a leading subject of harassing tweets. Soros, a Hungarian-born Jew demonized by right-wing conspiracy theorists, is one of the prominent Democrats who had pipe bombs sent to them this week.

The ADL’s study concludes online disinformation and abuse is disproportionately targeting Jews in the U.S. “during this crucial political moment.”

“Prior to the election of President Donald Trump, anti-Semitic harassment and attacks were rare and unexpected, even for Jewish Americans who were prominently situated in the public eye. Following his election, anti-Semitism has become normalized and harassment is a daily occurrence,” the report says.

The New York City-based ADL has commissioned other studies of online hate, including a report in May that estimated about 3 million Twitter users posted or re-posted at least 4.2 million anti-Semitic tweets in English over a 12-month period ending Jan. 28. An earlier report from the group said anti-Semitic incidents in the U.S. in the previous year had reached the highest tally it has counted in more than two decades.

For the latest report, researchers interviewed five Jewish people, including two recent political candidates, who had faced “human-based attacks” against them on social media this year. Their experiences demonstrated that anti-Semitic harassment “has a chilling effect on Jewish Americans’ involvement in the public sphere,” their report says.

“While each interview subject spoke of not wanting to let threats of the trolls affect their online activity, political campaigns, academic research or news reporting, they all admitted the threats of violence and deluges of anti-Semitism had become part of their internal equations,” researchers wrote.

The most popular term used in tweets containing #TrumpTrain was “Soros.” The study also found a “surprising” abundance of tweets referencing “QAnon,” a right-wing conspiracy theory that started on an online message board and has been spread by Trump supporters.

“There are strong anti-Semitic undertones, as followers decry George Soros and the Rothschild family as puppeteers,” researchers wrote.

Facebook Removes 82 Iranian-Linked Accounts

Facebook announced Friday that it has removed 82 accounts, pages or groups from its site and Instagram that originated in Iran, with some of the account owners posing as residents of the United States or Britain and posting about liberal politics.

At least one of the Facebook pages had more than one million followers, the firm said. The company said it did not know if the coordinated behavior was tied to the Iranian government. Less than $100 in advertising on Facebook and Instagram was spent to amplify the posts, the firm said.

The company said in a post titled “Taking Down Coordinated Inauthentic Behavior from Iran” that some of the accounts and pages were tied to ones taken down in August.

“Today we removed multiple pages, groups and accounts that originated in Iran for engaging in coordinated inauthentic behavior on Facebook and Instagram,” the firm said. “This is when people or organizations create networks of accounts to mislead others about who they are, or what they’re doing.”

Monitoring online activity

Facebook says it has ramped up its monitoring of the authenticity of accounts in the run-up to the U.S. midterm election, with more than 20,000 people working on safety and security. The social media firm says it has created an election “war room” on its campus to monitor behavior it deems “inauthentic.”

Nathaniel Gleicher, head of cybersecurity policy for Facebook, said that the behavior was coordinated and originated in Iran.

The posts appeared as if they were being made by citizens in the United States and, in a few cases, in Britain. The posts focused on “politically charged topics such as race relations, opposition to the president, and immigration.”

In terms of the reach of the posts, “about 1.02 million accounts followed at least one of these Pages, about 25,000 accounts joined at least one of these groups, and more than 28,000 accounts followed at least one of these Instagram accounts.”

A more advanced approach

The company released some images related to the accounts. 

An analysis of 10 Facebook pages and 14 Instagram accounts by the Atlantic Council’s Digital Forensic Research Lab concluded the pages and accounts were newer, and more advanced, than another batch of Iranian-linked pages and accounts that were removed in August.

“These assets were designed to engage in, rather than around, the political dialogue,” the lab’s Ben Nimmo and Graham Brookie wrote. “Their behavior showed how much they had adapted from earlier operations, focusing more on social media than third party websites.”

And those behind the accounts appeared to have learned a lesson from Russia’s ongoing influence campaign.

“One main aim of the Iranian group of accounts was to inflame America’s partisan divides,” the analysis said. “The tone of the comments added to the posts suggests that this had some success.”

Targeting U.S. midterm voters

Some of the accounts and pages directly targeted the upcoming U.S. elections, showing individuals talking about how they voted or calling on others to vote.

Most were aimed at a liberal audience.

“Proud to say that my first ever vote was for @BetoORourke,” said one post from an account called “No racism no war,” which had 412,000 likes and about half a million followers.

“Get your ass out and VOTE!!! Do your part,” said another post shared by the same account.

U.S. intelligence and national security officials have repeatedly warned of efforts by countries like Iran and China, in addition to Russia, to influence and interfere with U.S. elections next month and in 2020.

Democratic Representative Adam Schiff, the ranking member of the House Intelligence Committee, said Facebook’s decision to pull down the questionable pages and accounts and share the information with the public is critical to “keeping users aware of and inoculated against such foreign influence campaigns.”

“Facebook’s discovery and exposure of additional nefarious Iranian activity on its platforms so close to the midterms is an important reminder that both the public and private sector have a shared responsibility to remain vigilant as foreign entities continue their attempts to influence our political dialogue online,” Schiff said in a statement.

But not all the Iranian material was focused on the U.S. midterm election.

“These accounts masqueraded primarily as American liberals, posting only small amounts of anti-Saudi and anti-Israeli content,” the Digital Forensic Research Lab said.

A number of posts also took aim at U.S. policy in the Middle East in general. One post by @sut_racism accused Ivanka Trump of having “the blood of Dead Children on Her Hands.”

Still, the analysts said many of the posts also contained errors that gave away their non-U.S. origins. For example, in one post talking about the deaths of U.S. soldiers in World War II, the account’s authors used a photo of Soviet soldiers.

Michelle Quinn contributed to this report.

UK Fines Facebook Over Data Privacy Scandal, EU Seeks Audit

British regulators slapped Facebook on Thursday with a fine of 500,000 pounds ($644,000) — the maximum possible — for failing to protect the privacy of its users in the Cambridge Analytica scandal.

At the same time, European Union lawmakers demanded an audit of Facebook to better understand how it handles information, reinforcing how regulators in the region are taking a tougher stance on data privacy compared with U.S. authorities.

Britain’s Information Commissioner’s Office found that between 2007 and 2014, Facebook processed the personal information of users unfairly by giving app developers access to their information without informed consent. The failings meant the data of some 87 million people was used without their knowledge.

“Facebook failed to sufficiently protect the privacy of its users before, during and after the unlawful processing of this data,” said Elizabeth Denham, the information commissioner. “A company of its size and expertise should have known better and it should have done better.”

The ICO said a subset of the data was later shared with other organizations, including SCL Group, the parent company of political consultancy Cambridge Analytica, which counted U.S. President Donald Trump’s 2016 election campaign among its clients. News that the consultancy had used data from tens of millions of Facebook accounts to profile voters ignited a global scandal on data rights.

The fine amounts to a speck on Facebook’s finances. In the second quarter, the company generated revenue at a rate of nearly $100,000 per minute. That means it will take less than seven minutes for Facebook to bring in enough money to pay for the fine.

But it’s the maximum penalty allowed under the law at the time the breach occurred. Had the scandal taken place after new EU data protection rules went into effect this year, the amount would have been far higher — including maximum fines of 17 million pounds or 4 percent of global revenue, whichever is higher. Under that standard, Facebook would have been required to pay at least $1.6 billion, which is 4 percent of its revenue last year.
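
As a rough check on the figures in the last two paragraphs, here is the back-of-the-envelope arithmetic. The quarterly and annual revenue values are assumptions based on Facebook's published results (roughly $13.2 billion for Q2 2018 and $40.7 billion for 2017) rather than numbers stated in the article.

    # Back-of-the-envelope arithmetic behind the figures quoted above.
    fine_usd = 644_000                   # 500,000 pounds at the quoted exchange rate
    q2_revenue_usd = 13.2e9              # assumed Q2 2018 revenue, roughly $13.2 billion
    minutes_in_quarter = 91 * 24 * 60

    revenue_per_minute = q2_revenue_usd / minutes_in_quarter
    minutes_to_cover_fine = fine_usd / revenue_per_minute

    annual_revenue_2017 = 40.7e9         # assumed full-year 2017 revenue
    gdpr_style_cap = 0.04 * annual_revenue_2017

    print(f"revenue per minute: ${revenue_per_minute:,.0f}")          # about $100,000
    print(f"minutes to cover the fine: {minutes_to_cover_fine:.1f}")  # under 7 minutes
    print(f"4% of 2017 revenue: ${gdpr_style_cap / 1e9:.2f} billion") # about $1.6 billion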

The data rules are tougher than those in the United States, and a debate is ongoing over how the U.S. should respond. California is moving to put in place regulations similar to the EU’s strict rules by 2020, and other states are mulling more aggressive laws. That has rattled the big tech companies, which are pushing for a federal law that would treat them more leniently.

Facebook CEO Mark Zuckerberg said in a video message to a big data privacy conference in Brussels this week that “we have a lot more work to do” to safeguard personal data.

About the U.K. fine, Facebook responded in a statement that it is reviewing the decision.

“While we respectfully disagree with some of their findings, we have said before that we should have done more to investigate claims about Cambridge Analytica and taken action in 2015. We are grateful that the ICO has acknowledged our full cooperation throughout their investigation.”

Facebook also took solace in the fact that the ICO did not definitively assert that U.K. users had their data shared for campaigning. But the commissioner noted in her statement that “even if Facebook’s assertion is correct,” U.S. residents would have used the site while visiting the U.K.

EU lawmakers had summoned Zuckerberg in May to testify about the Cambridge Analytica scandal.

In their vote on Thursday, they said Facebook should agree to a full audit by Europe’s cyber security agency and data protection authority “to assess data protection and security of users’ personal data.”

The EU lawmakers also called for new electoral safeguards online, a ban on profiling for electoral purposes and moves to make it easier to recognize paid political advertisements and their financial backers.

Google Abandons Berlin Campus Plan After Locals Protest

Google is abandoning plans to establish a campus for tech startups in Berlin after protests from residents worried about gentrification.

The internet giant confirmed reports Thursday it will sublet the former electrical substation in the capital’s Kreuzberg district to two charitable organizations, Betterplace.org and Karuna.

Google has more than a dozen so-called campuses around the world. They are intended as hubs to bring together potential employees, startups and investors.

Protesters had recently picketed the Umspannwerk site with placards such as “Google go home.”

Karuna, which helps disadvantaged children, said Google will pay 14 million euros ($16 million) toward renovation and maintenance for the coming five years.

Google said it will continue to work with startups in Berlin, which has become a magnet for tech companies in Germany in recent years.

Hi-Tech Cameras Spy Fugitive Emissions

The technology used in space missions can be expensive, but it has some practical benefits here on Earth. Case in point: the thousands of high-resolution images taken from the surface of Mars, collected by the two Mars rovers, Spirit and Opportunity. Now researchers at Carnegie Mellon University in Pittsburgh are using the same technology to analyze air pollution here on our planet. VOA’s George Putic reports.