Category Archives: Technology

Silicon Valley & technology news

Report: Russia Has Access to UK Visa Processing

Investigative group Bellingcat and Russian website The Insider are suggesting that Russian intelligence has infiltrated the computer infrastructure of a company that processes British visa applications.

The investigation, published Friday, aims to show how two suspected Russian military intelligence agents, who have been charged with poisoning a former Russian spy in the English city of Salisbury, may have obtained British visas.

The Insider and Bellingcat said they interviewed the former chief technical officer of a company that processes visa applications for several consulates in Moscow, including that of Britain.

The man, who fled Russia last year and applied for asylum in the United States, said he had been coerced to work with agents of the main Russian intelligence agency FSB, who revealed to him that they had access to the British visa center’s CCTV cameras and had a diagram of the center’s computer network. The two outlets say they have obtained the man’s deposition to the U.S. authorities but have decided against publishing the man’s name, for his own safety.

The Insider and Bellingcat, however, did not demonstrate a clear link between the alleged efforts of Russian intelligence to penetrate the visa processing system and Alexander Mishkin and Anatoly Chepiga, who have been charged with poisoning Sergei Skripal in Salisbury in March this year.

The man also said that FSB officers told him in spring 2016 that they were going to send two people to Britain and asked for his assistance with the visa applications. The timing points to the first reported trip to Britain of the two men, who traveled under the names of Alexander Petrov and Anatoly Boshirov. The man, however, said he told the FSB that there was no way he could influence the decision-making on visa applications.

The man said he was coerced to sign an agreement to collaborate with the FSB after one of its officers threatened to jail his mother, and was asked to create a “backdoor” to the computer network. He said he sabotaged those efforts before he fled Russia in early 2017.

In September, British intelligence released surveillance images of the agents of Russian military intelligence GRU accused of the March nerve agent attack on double agent Skripal and his daughter in Salisbury. Bellingcat and The Insider quickly exposed the agents’ real names and the media, including The Associated Press, were able to corroborate their real identities.

The visa application processing company, TLSContact, and the British Home Office were not immediately available for comment.



Tech Firm Pays Refugees to Train AI Algorithms

Companies could help refugees rebuild their lives by paying them to boost artificial intelligence (AI) using their phones and giving them digital skills, a tech nonprofit said Thursday.

REFUNITE has developed an app, LevelApp, which is being piloted in Uganda to allow people who have been uprooted by conflict to earn instant money by “training” algorithms for AI.

Wars, persecution and other violence have uprooted a record 68.5 million people, according to the U.N. refugee agency.

People forced to flee their homes lose their livelihoods and struggle to create a source of income, REFUNITE co-chief executive Chris Mikkelsen told the Trust Conference in London.

“This provides refugees with a foothold in the global gig economy,” he told the Thomson Reuters Foundation’s two-day event, which focuses on a host of human rights issues.

$20 a day for AI work

A refugee in Uganda currently earning $1.25 a day doing basic tasks or menial jobs could make up to $20 a day doing simple AI labeling work on their phones, Mikkelsen said.

REFUNITE says the app could be particularly beneficial for women as the work can be done from the home and is more lucrative than traditional sources of income such as crafts.

The cash could enable refugees to buy livestock, educate children and access health care, leaving them less dependent on aid and helping them recover faster, according to Mikkelsen.

The work would also allow them to build digital skills they could take with them when they returned home, REFUNITE says.

“This would give them the ability to rebuild a life … and the dignity of no longer having to rely solely on charity,” Mikkelsen told the Thomson Reuters Foundation.

Teaching the machines

AI is the development of computer systems that can perform tasks that normally require human intelligence.

It is being used in a vast array of products from driverless cars to agricultural robots that can identify and eradicate weeds and computers able to identify cancers.

In order to “teach” machines to mimic human intelligence, people must repeatedly label images and other data until the algorithm can detect patterns without human intervention.
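The labeling-and-learning loop described above can be sketched in a few lines. This is a minimal illustration, not REFUNITE's actual system: humans attach labels to examples, and a simple learner (here, a nearest-centroid classifier written in plain Python) uses those labels to classify new data without further human input. All feature values and label names below are hypothetical, invented for illustration.

```python
# Humans label examples; the learner then detects the pattern on its own.
# Features here are hypothetical 2-D vectors (e.g. simplified image features).

def centroid(points):
    """Average of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(labeled_examples):
    """labeled_examples: list of ((x, y), label). Returns label -> centroid."""
    by_label = {}
    for features, label in labeled_examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label whose class centroid is closest to the new example."""
    def dist2(c):
        return (c[0] - features[0]) ** 2 + (c[1] - features[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Hypothetical human-labeled data, e.g. image features tagged "cat"/"dog".
labeled = [((1.0, 1.2), "cat"), ((0.9, 1.0), "cat"),
           ((4.0, 3.8), "dog"), ((4.2, 4.1), "dog")]
model = train(labeled)
print(predict(model, (1.1, 1.1)))  # near the "cat" cluster
```

Once enough labeled examples accumulate, the model classifies unseen inputs on its own — which is exactly the human-in-the-loop step that labeling work on a phone would feed.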

REFUNITE, based in California, is testing the app in Uganda, where it has launched a pilot project involving 5,000 refugees, mainly from South Sudan and the Democratic Republic of Congo. It hopes to scale up to 25,000 refugees within two years.

Mikkelsen said the initiative was a win-win as it would also benefit companies by slashing costs.

Another tech company, DeepBrain Chain, has committed to paying 200 refugees for a test period of six months, he said.



Facebook CEO Details Company Battle with Hate Speech, Violent Content

Facebook says it is getting better at proactively removing hate speech and changing the incentives that result in the most sensational and provocative content becoming the most popular on the site.

The company has done so, it says, by ramping up its operations so that computers can review and make quick decisions on large amounts of content with thousands of reviewers making more nuanced decisions.

In the future, if a person disagrees with Facebook’s decision, he or she will be able to appeal to an independent review board.

Facebook “shouldn’t be making so many important decisions about free expression and safety on our own,” Facebook CEO Mark Zuckerberg said in a call with reporters Thursday.

But as Zuckerberg detailed what the company has accomplished in recent months to crack down on spam, hate speech and violent content, he also acknowledged that Facebook has far to go.

“There are issues you never fix,” he said. “There’s going to be ongoing content issues.”

Company’s actions

In the call, Zuckerberg addressed a recent story in The New York Times that detailed how the company fought back during some of its biggest controversies over the past two years, such as the revelation of how the network was used by Russian operatives in the 2016 U.S. presidential election. 

The Times story suggested that company executives first dismissed early concerns about foreign operatives, then tried to deflect public attention away from Facebook once the news came out.

Zuckerberg said the firm made mistakes and was slow to understand the enormity of the issues it faced. “But to suggest that we didn’t want to know is simply untrue,” he said.

Zuckerberg also said he didn’t know the firm had hired Definers Public Affairs, a Washington, D.C., consulting firm that spread negative information about Facebook competitors as the social networking firm was in the midst of one scandal after another. Facebook severed its relationship with the firm.

“It may be normal in Washington, but it’s not the kind of thing I want Facebook associated with, which is why we won’t be doing it,” Zuckerberg said.

The firm posted a rebuttal to the Times story.

Content removed

Facebook said it is getting better at proactively finding and removing content such as spam, violent posts and hate speech. The company said it removed or took other action on 15.4 million pieces of violent content between June and September of this year, about double what it removed in the prior three months.

But Zuckerberg and other executives said Facebook still has more work to do in places such as Myanmar. In the third quarter, the firm said it proactively identified 63 percent of the hate speech it removed, up from 13 percent in the last quarter of 2017. At least 100 Burmese language experts are reviewing content, the firm said.

One issue that continues to dog Facebook is that some of the most popular content is also the most sensational and provocative. Facebook said it now penalizes what it calls “borderline content” so it gets less distribution and engagement.

“By fixing this incentive problem in our services, we believe it’ll create a virtuous cycle: by reducing sensationalism of all forms, we’ll create a healthier, less-polarized discourse where more people feel safe participating,” Zuckerberg wrote in a post. 

Critics of the company, however, said Zuckerberg hasn’t gone far enough to address the inherent problems of Facebook, which has 2 billion users.

“We have a man-made, for-profit, simultaneous communication space, marketplace and battle space and that it is, as a result, designed not to reward veracity or morality but virality,” said Peter W. Singer, strategist and senior fellow at New America, a nonpartisan think tank, at an event Thursday in Washington, D.C.

VOA national security correspondent Jeff Seldin contributed to this report.



Realistic Masks Made in Japan Find Demand from Tech, Car Companies

Super-realistic face masks made by a tiny company in rural Japan are in demand from the domestic tech and entertainment industries and from countries as far away as Saudi Arabia.

The 300,000-yen ($2,650) masks, made of resin and plastic by five employees at REAL-f Co., attempt to accurately duplicate an individual’s face down to fine wrinkles and skin texture.

Company founder Osamu Kitagawa came up with the idea while working at a printing machine manufacturer.

But it took him two years of experimentation before he found a way to use three-dimensional facial data from high-quality photographs to make the masks, and started selling them in 2011.

The company, based in the western prefecture of Shiga, receives about 100 orders every year from entertainment, automobile, technology and security companies, mainly in Japan.

For example, a Japanese car company ordered a mask of a sleeping face to improve its facial recognition technology to detect if a driver had dozed off, Kitagawa said.

“I am proud that my product is helping further development of facial recognition technology,” he added. “I hope that the developers would enhance face identification accuracy using these realistic masks.”

Kitagawa, 60, said he had also received orders from organizations linked to the Saudi government to create masks for the king and princes.

“I was told the masks were for portraits to be displayed in public areas,” he said.

Kitagawa said he works with clients carefully to ensure his products will not be used for illicit purposes and cause security risks, but added he could not rule out such threats.

He said his goal was to create 100 percent realistic masks, and he hoped to use softer materials, such as silicone, in the future.

“I would like these masks to be used for medical purposes, which is possible once they can be made using soft materials,” he said. “And as humanoid robots are being developed, I hope this will help developers to create [more realistic robots] at a low cost.”


Debut of China AI Anchor Stirs up Tech Race Debates

China’s state-run Xinhua News has debuted what it called the world’s first artificial intelligence (AI) anchor. But the novelty has generated more dislikes than likes online among Chinese netizens, with many calling the new virtual host “a news-reading device without a soul.”

Analysts say the latest creation has showcased China’s short-term progress in voice recognition, text mining and semantic analysis, but challenges remain ahead for its long-term ambition of becoming an AI superpower by 2030.

Nonhuman anchors

Collaborating with Chinese search engine Sogou, Xinhua introduced two AI anchors, one for English broadcasts and the other for Chinese, both of which are based on images of the agency’s real newscasters, Zhang Zhao and Qiu Hao respectively.

In its inaugural broadcast last week, the English-speaking anchor was more tech cheerleader than newshound, rattling off lines few anchors would be caught dead reading, such as: “the development of the media industry calls for continuous innovation and deep integration with the international advanced technologies.”

It also promised “to work tirelessly to keep you [audience] informed as texts will be typed into my system uninterrupted” 24/7 across multiple platforms simultaneously if necessary, according to the news agency.

No soul

Local audiences appear to be unimpressed, critiquing the news bots’ not-so-human touch and synthesized voices.

On Weibo, China’s Twitterlike microblogging platform, more than one user wrote that such anchors have “no soul,” in response to Xinhua’s announcement. And one user joked: “what if we have an AI [country] leader?” while another questioned what it stands for in terms of journalistic values by saying “What a nutcase. Fake news is on every day.”

Others pondered the implication AI news bots might have on employment and workers.

“It all comes down to production costs, which will determine if [we] lose jobs,” one Weibo user wrote. Some argued that only low-end labor-intensive jobs will be easily replaced by intelligent robots while others gloated about the possibility of employers utilizing an army of low-cost robots to make a fortune.

A simple use case

Industry experts said the digital anchor system is based on images of real people and possibly animated parts of their mouths and faces, with machine-learning technology recreating humanlike speech patterns and facial movements. It then uses a synthesized voice for the delivery of the news broadcast.

The creation showcases China’s progress in voice recognition, text mining and semantic analysis, all of which fall under natural language processing, according to Liu Chien-chih, secretary-general of the Asia IoT Alliance (AIOTA).

But that’s just one of many aspects of AI technologies, he wrote in an email to VOA.

Given the pace of experimental AI adoption by Chinese businesses, more user scenarios or designs of user interface can be anticipated in China, Liu added.

Chris Dong, director of China research at the market intelligence firm IDC, agreed the digital anchor is as simple as what he calls a “use case” for AI-powered services to attract commercials and audiences.

He said, in an email to VOA, that China has fast-tracked its big data advantage around consumers or internet of things (IoT) infrastructure to add commercial value.

Artificial Intelligence has also allowed China to accelerate its digital transformation across various industries or value chains, which are made smarter and more efficient, Dong added.

Far from a threat to the US

But both said China is far from a threat to challenge U.S. leadership on AI given its lack of an open market and respect for intellectual property rights (IPRs) as well as its lagging innovative competency on core AI technologies.

Earlier, Lee Kai-fu, a well-known venture capitalist who led Google China before the company pulled out of the country, was quoted by the news website TechCrunch as saying that the United States may have created artificial intelligence, but China is taking the ball and running with it when it comes to one of the world’s most pivotal technology innovations.

Lee summed up four major drivers behind his observation that China is beating the United States in AI: abundant data, hungry entrepreneurs, growing AI expertise and massive government support and funding.

Beijing has set a goal to become an AI superpower by 2030, and to turn the sector into a $150 billion industry.

Yet IDC’s Dong cast doubt on AI’s adoption rate and effectiveness in China’s traditional sectors; in some, such as manufacturing, the situation is worsening, he said.

He said China’s “state capitalism may have its short-term efficiency and gain, but over the longer-term, it is the open market that is fundamental to building an effective innovation ecosystem.”

The analyst urges China to open up and include multinational software and services to contribute to its digital economic transformation.

“China’s ‘Made-in-China 2025’ should go back to the original flavor … no longer Made and Controlled by Chinese, but more [of] an Open Platform of Made-in-China that both local and foreign players have a level-playing field,” he said.

In addition to a significant gap in core technologies, China’s failure to uphold IPRs will work against its future development of AI software, “which often sells for many times more in the U.S. than in China, as the Chinese tend to think intangible assets are free,” AIOTA’s Liu said.
