Category Archives: Technology

Silicon Valley and technology news. Technology is the application of conceptual knowledge to achieve practical goals, especially in a reproducible way. The word technology can also refer to the products of such efforts, including both tangible tools, such as utensils or machines, and intangible ones, such as software. Technology plays a critical role in science, engineering and everyday life.

Report: Russia Has Access to UK Visa Processing

Investigative group Bellingcat and Russian website The Insider are suggesting that Russian intelligence has infiltrated the computer infrastructure of a company that processes British visa applications.

The investigation, published Friday, aims to show how two suspected Russian military intelligence agents, who have been charged with poisoning a former Russian spy in the English city of Salisbury, may have obtained British visas.

The Insider and Bellingcat said they interviewed the former chief technical officer of a company that processes visa applications for several consulates in Moscow, including that of Britain.

The man, who fled Russia last year and applied for asylum in the United States, said he had been coerced to work with agents of the main Russian intelligence agency FSB, who revealed to him that they had access to the British visa center’s CCTV cameras and had a diagram of the center’s computer network. The two outlets say they have obtained the man’s deposition to the U.S. authorities but have decided against publishing the man’s name, for his own safety.

The Insider and Bellingcat, however, did not demonstrate a clear link between the alleged efforts of Russian intelligence to penetrate the visa processing system and Alexander Mishkin and Anatoly Chepiga, who have been charged with poisoning Sergei Skripal in Salisbury in March this year.

The man also said that FSB officers told him in spring 2016 that they were going to send two people to Britain and asked for his assistance with the visa applications. The timing points to the first reported trip to Britain of the two men, who traveled under the names of Alexander Petrov and Anatoly Boshirov. The man, however, said he told the FSB that there was no way he could influence the decision-making on visa applications.

The man said he was coerced to sign an agreement to collaborate with the FSB after one of its officers threatened to jail his mother, and was asked to create a “backdoor” to the computer network. He said he sabotaged those efforts before he fled Russia in early 2017.

In September, British intelligence released surveillance images of the agents of Russian military intelligence GRU accused of the March nerve agent attack on double agent Skripal and his daughter in Salisbury. Bellingcat and The Insider quickly exposed the agents’ real names and the media, including The Associated Press, were able to corroborate their real identities.

The visa application processing company, TLSContact, and the British Home Office were not immediately available for comment.

Tech Firm Pays Refugees to Train AI Algorithms

Companies could help refugees rebuild their lives by paying them to improve artificial intelligence (AI) systems using their phones, giving them digital skills in the process, a tech nonprofit said Thursday.

REFUNITE has developed an app, LevelApp, which is being piloted in Uganda to allow people who have been uprooted by conflict to earn instant money by “training” algorithms for AI.

Wars, persecution and other violence have uprooted a record 68.5 million people, according to the U.N. refugee agency.

People forced to flee their homes lose their livelihoods and struggle to create a source of income, REFUNITE co-chief executive Chris Mikkelsen told the Trust Conference in London.

“This provides refugees with a foothold in the global gig economy,” he told the Thomson Reuters Foundation’s two-day event, which focuses on a host of human rights issues.

$20 a day for AI work

A refugee in Uganda currently earning $1.25 a day doing basic tasks or menial jobs could make up to $20 a day doing simple AI labeling work on their phones, Mikkelsen said.

REFUNITE says the app could be particularly beneficial for women as the work can be done from the home and is more lucrative than traditional sources of income such as crafts.

The cash could enable refugees to buy livestock, educate children and access health care, leaving them less dependent on aid and helping them recover faster, according to Mikkelsen.

The work would also allow them to build digital skills they could take with them when they returned home, REFUNITE says.

“This would give them the ability to rebuild a life … and the dignity of no longer having to rely solely on charity,” Mikkelsen told the Thomson Reuters Foundation.

Teaching the machines

AI is the development of computer systems that can perform tasks that normally require human intelligence.

It is being used in a vast array of products, from driverless cars to agricultural robots that can identify and eradicate weeds, and computers able to identify cancers.

In order to “teach” machines to mimic human intelligence, people must repeatedly label images and other data until the algorithm can detect patterns without human intervention.
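The label-then-learn loop described above can be sketched with a toy classifier. This is a hypothetical, minimal illustration only (real labeling work involves thousands of images and neural networks, not a one-feature rule): humans attach labels to examples, and the algorithm derives a rule it can apply to new data on its own.

```python
def train_nearest_mean(labeled_examples):
    """Learn the average feature value for each human-assigned label."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        sums[label] = sums.get(label, 0.0) + features
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, features):
    """Assign the label whose learned average is closest to the input."""
    return min(model, key=lambda label: abs(model[label] - features))

# Human-labeled training data: (brightness score, label)
labeled = [(0.9, "day"), (0.8, "day"), (0.2, "night"), (0.1, "night")]
model = train_nearest_mean(labeled)

print(predict(model, 0.85))  # → day
print(predict(model, 0.15))  # → night
```

Once enough examples are labeled, the model classifies unseen inputs without further human intervention, which is the pattern-detection step the paragraph above describes.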

REFUNITE, based in California, is testing the app in Uganda, where it has launched a pilot project involving 5,000 refugees, mainly from South Sudan and the Democratic Republic of Congo. It hopes to scale up to 25,000 refugees within two years.

Mikkelsen said the initiative was a win-win as it would also benefit companies by slashing costs.

Another tech company, DeepBrain Chain, has committed to paying 200 refugees for a test period of six months, he said.

Facebook CEO Details Company Battle with Hate Speech, Violent Content

Facebook says it is getting better at proactively removing hate speech and changing the incentives that result in the most sensational and provocative content becoming the most popular on the site.

The company has done so, it says, by ramping up its operations so that computers can review and make quick decisions on large amounts of content, while thousands of human reviewers make more nuanced decisions.

In the future, if a person disagrees with Facebook’s decision, he or she will be able to appeal to an independent review board.

Facebook “shouldn’t be making so many important decisions about free expression and safety on our own,” Facebook CEO Mark Zuckerberg said in a call with reporters Thursday.

But as Zuckerberg detailed what the company has accomplished in recent months to crack down on spam, hate speech and violent content, he also acknowledged that Facebook has far to go.

“There are issues you never fix,” he said. “There’s going to be ongoing content issues.”

Company’s actions

In the call, Zuckerberg addressed a recent story in The New York Times that detailed how the company fought back during some of its biggest controversies over the past two years, such as the revelation of how the network was used by Russian operatives in the 2016 U.S. presidential election. 

The Times story suggested that company executives first dismissed early concerns about foreign operatives, then tried to deflect public attention away from Facebook once the news came out.

Zuckerberg said the firm made mistakes and was slow to understand the enormity of the issues it faced. “But to suggest that we didn’t want to know is simply untrue,” he said.

Zuckerberg also said he didn’t know the firm had hired Definers Public Affairs, a Washington, D.C., consulting firm that spread negative information about Facebook competitors as the social networking firm was in the midst of one scandal after another. Facebook severed its relationship with the firm.

“It may be normal in Washington, but it’s not the kind of thing I want Facebook associated with, which is why we won’t be doing it,” Zuckerberg said.

The firm posted a rebuttal to the Times story.

Content removed

Facebook said it is getting better at proactively finding and removing content such as spam, violent posts and hate speech. The company said it removed or took other action on 15.4 million pieces of violent content between June and September of this year, about double what it removed in the prior three months.

But Zuckerberg and other executives said Facebook still has more work to do in places such as Myanmar. In the third quarter, the firm said it proactively identified 63 percent of the hate speech it removed, up from 13 percent in the last quarter of 2017. At least 100 Burmese language experts are reviewing content, the firm said.

One issue that continues to dog Facebook is that some of the most popular content is also the most sensational and provocative. Facebook said it now penalizes what it calls “borderline content” so it gets less distribution and engagement.

“By fixing this incentive problem in our services, we believe it’ll create a virtuous cycle: by reducing sensationalism of all forms, we’ll create a healthier, less-polarized discourse where more people feel safe participating,” Zuckerberg wrote in a post. 

Critics of the company, however, said Zuckerberg hasn’t gone far enough to address the inherent problems of Facebook, which has 2 billion users.

“We have a man-made, for-profit, simultaneous communication space, marketplace and battle space and that it is, as a result, designed not to reward veracity or morality but virality,” said Peter W. Singer, strategist and senior fellow at New America, a nonpartisan think tank, at an event Thursday in Washington, D.C.

VOA national security correspondent Jeff Seldin contributed to this report.

Realistic Masks Made in Japan Find Demand from Tech, Car Companies

Super-realistic face masks made by a tiny company in rural Japan are in demand from the domestic tech and entertainment industries and from countries as far away as Saudi Arabia.

The 300,000-yen ($2,650) masks, made of resin and plastic by five employees at REAL-f Co., attempt to accurately duplicate an individual’s face down to fine wrinkles and skin texture.

Company founder Osamu Kitagawa came up with the idea while working at a printing machine manufacturer.

But it took him two years of experimentation before he found a way to use three-dimensional facial data from high-quality photographs to make the masks, and started selling them in 2011.

The company, based in the western prefecture of Shiga, receives about 100 orders every year from entertainment, automobile, technology and security companies, mainly in Japan.

For example, a Japanese car company ordered a mask of a sleeping face to improve its facial recognition technology to detect if a driver had dozed off, Kitagawa said.

“I am proud that my product is helping further development of facial recognition technology,” he added. “I hope that the developers would enhance face identification accuracy using these realistic masks.”

Kitagawa, 60, said he had also received orders from organizations linked to the Saudi government to create masks for the king and princes.

“I was told the masks were for portraits to be displayed in public areas,” he said.

Kitagawa said he works with clients carefully to ensure his products will not be used for illicit purposes and cause security risks, but added he could not rule out such threats.

He said his goal was to create 100 percent realistic masks, and he hoped to use softer materials, such as silicon, in the future.

“I would like these masks to be used for medical purposes, which is possible once they can be made using soft materials,” he said. “And as humanoid robots are being developed, I hope this will help developers to create [more realistic robots] at a low cost.”

Sentient Office Buildings Adjust to Workers’ Personal Comfort and Well Being

Office workers often complain that the building is either too hot or too cold. Now, engineers and architects are working on creating “sentient buildings” that can cater to the personal needs and well-being of each employee in the hopes of increasing productivity. VOA's Elizabeth Lee has this report from Los Angeles.

Debut of China AI Anchor Stirs up Tech Race Debates

China’s state-run Xinhua News has debuted what it called the world’s first artificial intelligence (AI) anchor. But the novelty has generated more dislikes than likes online among Chinese netizens, with many calling the new virtual host “a news-reading device without a soul.”

Analysts say the latest creation has showcased China’s short-term progress in voice recognition, text mining and semantic analysis, but challenges remain ahead for its long-term ambition of becoming an AI superpower by 2030.

Nonhuman anchors

Collaborating with Chinese search engine Sogou, Xinhua introduced two AI anchors, one for English broadcasts and the other for Chinese, both based on images of the agency’s real newscasters, Zhang Zhao and Qiu Hao, respectively.

In its inaugural broadcast last week, the English-speaking anchor was more tech cheerleader than newshound, rattling off lines few anchors would be caught dead reading, such as: “the development of the media industry calls for continuous innovation and deep integration with the international advanced technologies.”

It also promised “to work tirelessly to keep you [audience] informed as texts will be typed into my system uninterrupted” 24/7 across multiple platforms simultaneously if necessary, according to the news agency.

No soul

Local audiences appear to be unimpressed, critiquing the news bots’ not-so-human touch and synthesized voices.

On Weibo, China’s Twitter-like microblogging platform, more than one user wrote in response to Xinhua’s announcement that such anchors have “no soul.” One user joked, “what if we have an AI [country] leader?” while another questioned what the move says about journalistic values: “What a nutcase. Fake news is on every day.”

Others pondered the implication AI news bots might have on employment and workers.

“It all comes down to production costs, which will determine if [we] lose jobs,” one Weibo user wrote. Some argued that only low-end labor-intensive jobs will be easily replaced by intelligent robots while others gloated about the possibility of employers utilizing an army of low-cost robots to make a fortune.

A simple use case

Industry experts said the digital anchor system is based on images of real people and possibly animated parts of their mouths and faces, with machine-learning technology recreating humanlike speech patterns and facial movements. It then uses a synthesized voice for the delivery of the news broadcast.

The creation showcases China’s progress in voice recognition, text mining and semantic analysis, all of which fall under natural language processing, according to Liu Chien-chih, secretary-general of the Asia IoT Alliance (AIOTA).

But that’s just one of many aspects of AI technologies, he wrote in an email to VOA.

Given the pace of experimental AI adoption by Chinese businesses, more user scenarios or designs of user interface can be anticipated in China, Liu added.

Chris Dong, director of China research at the market intelligence firm IDC, agreed the digital anchor is as simple as what he calls a “use case” for AI-powered services to attract commercials and audiences.

He said, in an email to VOA, that China has fast-tracked its big data advantage around consumers or internet of things (IoT) infrastructure to add commercial value.

Artificial intelligence has also allowed China to accelerate its digital transformation across various industries and value chains, making them smarter and more efficient, Dong added.

Far from a threat to the US

But both said China is far from a threat to challenge U.S. leadership on AI given its lack of an open market and respect for intellectual property rights (IPRs) as well as its lagging innovative competency on core AI technologies.

Earlier, Lee Kai-fu, a well-known venture capitalist who led Google’s operations in China before the company pulled out of the country, was quoted by news website TechCrunch as saying that the United States may have created artificial intelligence, but China is taking the ball and running with it when it comes to one of the world’s most pivotal technology innovations.

Lee summed up four major drivers behind his observation that China is beating the United States in AI: abundant data, hungry entrepreneurs, growing AI expertise and massive government support and funding.

Beijing has set a goal to become an AI superpower by 2030, and to turn the sector into a $150 billion industry.

Yet, IDC’s Dong cast doubt on AI’s adoption rate and effectiveness in China’s traditional sectors. In some, such as manufacturing, the situation is worsening, he said.

He said China’s “state capitalism may have its short-term efficiency and gain, but over the longer-term, it is the open market that is fundamental to building an effective innovation ecosystem.”

The analyst urged China to open up and allow multinational software and services to contribute to its digital economic transformation.

“China’s ‘Made-in-China 2025’ should go back to the original flavor … no longer Made and Controlled by Chinese, but more [of] an Open Platform of Made-in-China that both local and foreign players have a level-playing field,” he said.

In addition to a significant gap in core technologies, China’s failure to uphold IPRs will go against its future development of AI software, “which is often sold many-fold in the U.S. than in China as the Chinese tend to think intangible assets are free,” AIOTA’s Liu said.

Debut of China AI Anchor Stirs up Tech Race Debates

China’s state-run Xinhua News has debuted what it called the world’s first artificial intelligence (AI) anchor. But the novelty has generated more dislikes than likes online among Chinese netizens, with many calling the new virtual host “a news-reading device without a soul.”

Analysts say the latest creation has showcased China’s short-term progress in voice recognition, text mining and semantic analysis, but challenges remain ahead for its long-term ambition of becoming an AI superpower by 2030.

Nonhuman anchors

Collaborating with Chinese search engine Sogou, Xinhua introduced two AI anchors, one for English broadcasts and the other for Chinese, both of which are based on images of the agency’s real newscasters, Zhang Zhao and Qiu Hao respectively.

In its inaugural broadcast last week, the English-speaking anchor was more tech cheerleader than newshound, rattling off lines few anchors would be caught dead reading, such as: “the development of the media industry calls for continuous innovation and deep integration with the international advanced technologies.”

It also promised “to work tirelessly to keep you [audience] informed as texts will be typed into my system uninterrupted” 24/7 across multiple platforms simultaneously if necessary, according to the news agency.

No soul

Local audiences appear to be unimpressed, critiquing the news bots’ not so human touch and synthesized voices.

On Weibo, China’s Twitterlike microblogging platform, more than one user wrote that such anchors have “no soul,” in response to Xinhua’s announcement. And one user joked: “what if we have an AI [country] leader?” while another questioned what it stands for in terms of journalistic values by saying “What a nutcase. Fake news is on every day.”

Others pondered the implication AI news bots might have on employment and workers.

“It all comes down to production costs, which will determine if [we] lose jobs,” one Weibo user wrote. Some argued that only low-end labor-intensive jobs will be easily replaced by intelligent robots while others gloated about the possibility of employers utilizing an army of low-cost robots to make a fortune.

A simple use case

Industry experts said the digital anchor system is based on images of real people and possibly animated parts of their mouths and faces, with machine-learning technology recreating humanlike speech patterns and facial movements. It then uses a synthesized voice for the delivery of the news broadcast.

The creation showcases China’s progress in voice recognition, text mining and semantic analysis, all of which is covered by natural language processing, according to Liu Chien-chih, secretary-general of Asia IoT Alliance (AIOTA).

But that’s just one of many aspects of AI technologies, he wrote in an email to VOA.

Given the pace of experimental AI adoption by Chinese businesses, more user scenarios or designs of user interface can be anticipated in China, Liu added.

Chris Dong, director of China research at the market intelligence firm IDC, agreed the digital anchor is as simple as what he calls a “use case” for AI-powered services to attract commercials and audiences.

He said, in an email to VOA, that China has fast-tracked its big data advantage around consumers or internet of things (IoT) infrastructure to add commercial value.

Artificial Intelligence has also allowed China to accelerate its digital transformation across various industries or value chains, which are made smarter and more efficient, Dong added.

Far from a threat to the US

But both said China is far from a threat to challenge U.S. leadership on AI given its lack of an open market and respect for intellectual property rights (IPRs) as well as its lagging innovative competency on core AI technologies.

Earlier, Lee Kai-fu, a well-known venture capitalist who led Google before it pulled out of China, was quoted by news website Tech Crunch as saying that the United States may have created Artificial Intelligence, but China is taking the ball and running with it when it comes to one of the world’s most pivotal technology innovations.

Lee summed up four major drivers behind his observation that China is beating the United States in AI: abundant data, hungry entrepreneurs, growing AI expertise and massive government support and funding.

Beijing has set a goal to become an AI superpower by 2030, and to turn the sector into a $150 billion industry.

Yet, IDC’s Dong cast doubts on AI’s adoption rate and effectiveness in China’s traditional sectors. Some, such as the manufacturing sector, is worsening, he said.

He said China’s “state capitalism may have its short-term efficiency and gain, but over the longer-term, it is the open market that is fundamental to building an effective innovation ecosystem.”

The analyst urged China to open up and allow multinational software and services firms to contribute to its digital economic transformation.

“China’s ‘Made-in-China 2025’ should go back to the original flavor … no longer Made and Controlled by Chinese, but more [of] an Open Platform of Made-in-China that both local and foreign players have a level-playing field,” he said.

In addition to a significant gap in core technologies, China’s failure to uphold IPR will work against its future development of AI software, “which is often sold many-fold [higher] in the U.S. than in China as the Chinese tend to think intangible assets are free,” AIOTA’s Liu said.

US Lawmaker Says Facebook Cannot Be Trusted to Regulate Itself

Democratic U.S. Representative David Cicilline, expected to become the next chairman of the House Judiciary Committee’s antitrust panel, said Wednesday that Facebook cannot be trusted to regulate itself and that Congress should take action.

Cicilline, citing a report in the New York Times on Facebook’s efforts to deal with a series of crises, said on Twitter: “This staggering report makes clear that @Facebook executives will always put their massive profits ahead of the interests of their customers.”

“It is long past time for us to take action,” he said. Facebook did not immediately respond to a request for comment.

Facebook Chief Executive Mark Zuckerberg said a year ago that the company would put its “community” before profit, and it has doubled its staff focused on safety and security issues since then. Spending also has increased on developing automated tools to catch propaganda and material that violates the company’s posting policies.

​Other initiatives have brought increased transparency about the administrators of pages and purchasers of ads on Facebook. Some critics, including lawmakers and users, still contend that Facebook’s bolstered systems and processes are prone to errors and that only laws will result in better performance. The New York Times said Zuckerberg and the company’s chief operating officer, Sheryl Sandberg, ignored warning signs that the social media company could be “exploited to disrupt elections, broadcast viral propaganda and inspire deadly campaigns of hate around the globe.” And when the warning signs became evident, they “sought to conceal them from public view.”

“We’ve known for some time that @Facebook chose to turn a blind eye to the spread of hate speech and Russian propaganda on its platform,” said Cicilline, who will likely take the reins of the subcommittee on regulatory reform, commercial and antitrust law when the new, Democratic-controlled Congress is seated in January.

“Now we know that once they knew the truth, top @Facebook executives did everything they could to hide it from the public by using a playbook of suppressing opposition and propagating conspiracy theories,” he said.

“Next January, Congress should get to work enacting new laws to hold concentrated economic power to account, address the corrupting influence of corporate money in our democracy, and restore the rights of Americans,” Cicilline said.

Soft Wearable Tech is Helping People Move

Robots with rigid metal frames are being used to help the paralyzed walk and have applications that could one day grant military fighters extra power on the battlefield. The problem is that they’re uncomfortable and heavy. But researchers at Harvard University are working on lighter, flexible devices that move easily and don’t weigh much. VOA’s Kevin Enochs reports.

Nigerian Firm Takes Blame for Routing Google Traffic Through China

Nigeria’s Main One Cable took responsibility Tuesday for a glitch that temporarily caused some Google global traffic to be misrouted through China, saying it accidentally caused the problem during a network upgrade.

The issue surfaced Monday afternoon as internet monitoring firms ThousandEyes and BGPmon said some traffic to Alphabet’s Google had been routed through China and Russia, raising concerns that the communications had been intentionally hijacked. 

Main One said in an email that it had caused a 74-minute glitch by misconfiguring a border gateway protocol filter used to route traffic across the internet. That resulted in some Google traffic being sent through Main One partner China Telecom, the West African firm said. 
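A misconfigured filter of this kind is dangerous because routers forward packets on the longest matching prefix: if a more-specific route is accidentally announced, it wins over the legitimate, shorter one and pulls traffic onto the wrong path. The sketch below illustrates that longest-prefix-match behavior only; the prefixes and next-hop labels are hypothetical, not taken from the actual incident.

```python
# Illustrative sketch of longest-prefix-match route selection,
# showing why an accidentally leaked more-specific BGP prefix
# attracts traffic away from the legitimate route.
import ipaddress

def best_route(dest, routes):
    """Return the next hop of the route whose prefix matches dest
    with the longest prefix length (most-specific match wins)."""
    addr = ipaddress.ip_address(dest)
    matches = [(prefix, next_hop) for prefix, next_hop in routes
               if addr in prefix]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Hypothetical routing table: a legitimate /16 and a leaked /24.
routes = [
    (ipaddress.ip_network("8.8.0.0/16"), "legitimate-path"),
    (ipaddress.ip_network("8.8.8.0/24"), "leaked-path"),
]

print(best_route("8.8.8.8", routes))  # the more-specific /24 leak wins
print(best_route("8.8.1.1", routes))  # outside the leak, the /16 is used
```

In a real leak, the misconfigured filter lets such a more-specific announcement propagate to peers, so remote networks redirect their traffic for that prefix until the announcement is withdrawn.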

Google has said little about the matter. It acknowledged the problem Monday in a post on its website that said it was investigating the glitch and that it believed the problem originated outside the company. The company did not say how many users were affected or identify specific customers. 

Google representatives could not be reached Tuesday to comment on Main One’s statement. 

Hacking concerns

Even though Main One said it was to blame, some security experts said the incident highlighted concerns about the potential for hackers to conduct espionage or disrupt communications by exploiting known vulnerabilities in the way traffic is routed over the internet. 

The U.S.-China Economic and Security Review Commission, a Washington group that advises the U.S. Congress on security issues, plans to investigate the issue, said Commissioner Michael Wessel. 

“We will work to gain more facts about what has happened recently and look at what legal tools or legislation or law enforcement activities can help address this problem,” Wessel said. 

Glitches in border gateway protocol filters have caused multiple outages to date, including cases in which traffic from U.S. internet and financial services firms was routed through Russia, China and Belarus. 

Yuval Shavitt, a network security researcher at Tel Aviv University, said it was possible that Monday’s issue was not an accident. 

“You can always claim that this is some kind of configuration error,” said Shavitt, who last month co-authored a paper alleging that the Chinese government had conducted a series of internet hijacks. 

Main One, which describes itself as a leading provider of telecom and network services for businesses in West Africa, said that it had investigated the matter and implemented new processes to prevent it from happening again. 

NATO Looks to Startups, Disruptive Tech to Meet Emerging Threats 

NATO is developing new high-tech tools, such as the ability to 3-D-print parts for weapons and deliver them by drone, as it scrambles to retain a competitive edge over Russia, China and other would-be battlefield adversaries. 

Gen. Andre Lanata, who took over as head of the NATO transformation command in September, told a conference in Berlin that his command demonstrated over 21 “disruptive” projects during military exercises in Norway this month. 

He urged startups as well as traditional arms manufacturers to work with the Atlantic alliance to boost innovation, as rapid and easy access to emerging technologies was helping adversaries narrow NATO’s long-standing advantage. 

Lanata’s command hosted its third “innovation challenge” in tandem with the conference this week, where 10 startups and smaller firms presented ideas for defeating swarms of drones on the ground and in the air. 

Winner from Belgium

Belgian firm ALX Systems, which builds civilian surveillance drones, won this year’s challenge.

Its CEO, Geoffrey Mormal, said small companies like his often struggled with cumbersome weapons procurement processes. 

“It’s a very hot topic, so perhaps it will help to enable quicker decisions,” he told Reuters. 

Lanata said NATO was focused on areas such as artificial intelligence, connectivity, quantum computing, big data and hypervelocity, but also wants to learn from DHL and others how to improve the logistics of moving weapons and troops. 

NATO Secretary-General Jens Stoltenberg said increasing military spending by NATO members would help tackle some of the challenges, but efforts were also needed to reduce widespread duplication and fragmentation in the European defense sector. 

Participants also met behind closed doors with chief executives from 12 of the 15 biggest arms makers in Europe. 

5G is Coming, Get Ready

If you’re really lucky and live in the U.S. cities of Houston, Indianapolis, Los Angeles or Sacramento, you now have access to a 5G network. If you live anywhere else, just be patient… a 5G mobile network is coming your way, and it’s already arriving in some countries. VOA’s Kevin Enochs reports.

Media: German States Want Social Media Law Tightened

German states have drafted a list of demands aimed at tightening a law that requires social media companies like Facebook and Twitter to remove hate speech from their sites, the Handelsblatt newspaper reported Monday.

Justice ministers from the states will submit their proposed revisions to the German law called NetzDG at a meeting with Justice Minister Katarina Barley on Thursday, the newspaper said, saying it had obtained a draft of the document.

The law, which came into full force on Jan. 1, is a highly ambitious effort to control what appears on social media and it has drawn a range of criticism.

While the German states are focused on concerns about how complaints are processed, other officials have called for changes following criticism that too much content was being blocked.

The states’ justice ministers are calling for changes that would make it easier for people who want to complain about banned content such as pro-Nazi ideology to find the required forms on social media platforms.

They also want to fine social media companies up to 500,000 euros ($560,950) for providing “meaningless replies” to queries from law enforcement authorities, the newspaper said.

Till Steffen, the top justice official in Hamburg and a member of the Greens party, told the newspaper that the law had in some cases proven to be “a paper tiger.”

“If we want to effectively limit hate and incitement on the internet, we have to give the law more bite and close the loopholes,” he told the paper. “For instance, it cannot be the case that some platforms hide their complaint forms so that no one can find them.”

Facebook in July said it had deleted hundreds of offensive posts since implementation of the law, which foresees fines of up to 50 million euros ($56.10 million) for failure to comply.
