Category Archives: Technology

Silicon Valley & Technology News

US Charges Six Russian Agents in Global Cyber Attack

U.S. prosecutors have charged six Russian military intelligence officers in connection with a global computer malware campaign that struck the 2017 French presidential election and the 2018 Winter Olympics in South Korea, among other targets.

The cyber campaign represented “the most disruptive and destructive series of computer attacks ever attributed to a single group,” said John C. Demers, head of the Justice Department’s national security division. “No country has weaponized its cyber capabilities as maliciously or irresponsibly as Russia, wantonly causing unprecedented damage to pursue small tactical advantages and to satisfy fits of spite,” Demers said Monday at a news conference.

The six hackers, all officers of the Russian military intelligence service known as the GRU, “engaged in computer intrusions and attacks intended to support Russian government efforts to undermine, retaliate against, or otherwise destabilize” targets around the world, the Justice Department said.

Targets

These included Ukrainian government and critical infrastructure; Georgian companies and government entities; the elections in France; an investigation into Russia’s poisoning of former spy Sergei Skripal in Britain; and the Winter Olympics in PyeongChang, the Justice Department said.

In addition, the hackers, using the NotPetya malware, struck hospitals and medical facilities in the Heritage Valley Health System in Pennsylvania, a FedEx Corporation subsidiary and an unidentified U.S. pharmaceutical manufacturer.

The Justice Department had previously charged GRU officers with hacking Democratic emails during the 2016 presidential election. The latest charges do not allege election interference on the part of the GRU.

The six defendants were identified as Yuriy Sergeyevich Andrienko, Sergey Vladimirovich Detistov, Pavel Valeryevich Frolov, Anatoliy Sergeyevich Kovalev, Artem Valeryevich Ochichenko, and Petr Nikolayevich Pliskin. They face charges of conspiracy, computer hacking, wire fraud, aggravated identity theft, and false registration of a domain name.

Twitter Blocks Tweet About Masks From White House Coronavirus Team Adviser

Dr. Scott Atlas is a neuroradiologist, a fellow at a conservative-leaning think tank, a science adviser to President Donald Trump and a member of the White House Coronavirus Task Force. He is also the latest person in Trump’s world to have a tweet blocked by Twitter.

Over the weekend, Atlas tweeted “Masks work? NO,” and said widespread use of masks is not supported, according to the Associated Press. Twitter told the AP that the tweet violated its policy that prohibits false and misleading information about COVID-19 that could lead to harm. A “This Tweet is unavailable” label was placed on Atlas’ Twitter feed where his tweet once was.

Atlas followed up with another tweet, which remained on the site as of Sunday night. He praised what he called Trump’s “guideline,” which is to “use masks for their intended purpose – when close to others, especially hi risk. Otherwise, social distance. No widespread mandates.”

The deletion of Atlas’ tweet is the latest in what has become an ongoing battle between Trump and internet companies. Twitter has blocked or put warnings on Trump’s tweets regarding COVID-19, the disease caused by the coronavirus, as well as vote-by-mail. Last week, Twitter temporarily blocked the Trump campaign’s ability to share a story about his presidential challenger, former Vice President Joe Biden.

Some congressional leaders accuse Twitter, Facebook and other internet companies of bias and say they are unfairly limiting speech close to the U.S. election. Some have called for the leaders of Twitter and Facebook, which has also taken action on some of Trump’s posts, to testify in front of Congress as soon as the coming week.

Twitter told the AP it relies on public health authorities to determine whether a statement is false or misleading. In September, Dr. Robert Redfield, the director of the Centers for Disease Control and Prevention, testified at a congressional hearing that masks are “the most powerful public health tool” against the coronavirus.

Atlas, a fellow at the Hoover Institution at Stanford University, joined the White House task force in August. A medical doctor, Atlas does not have a background in infectious diseases or public health. He is reportedly helping to shape White House policies about how to handle the virus, including policies about masks and other issues.

Atlas told the AP that Twitter’s actions were censorship. “General population masks and mask mandates do not work,” he said.

YouTube Follows Twitter And Facebook With QAnon Crackdown

YouTube is following the lead of Twitter and Facebook, saying that it is taking more steps to limit QAnon and other baseless conspiracy theories that can lead to real-world violence.
The Google-owned video platform said Thursday it will now prohibit material targeting a person or group with conspiracy theories that have been used to justify violence.  
One example would be videos that threaten or harass someone by suggesting they are complicit in a conspiracy such as QAnon, which paints President Donald Trump as a secret warrior against a supposed child-trafficking ring run by celebrities and “deep state” government officials.
Pizzagate is another internet conspiracy theory — essentially a predecessor to QAnon — that would fall into the banned category. Its promoters claimed children were being harmed at a pizza restaurant in Washington, D.C. A man who believed in the conspiracy entered the restaurant in December 2016 and fired an assault rifle. He was sentenced to prison in 2017.
YouTube is the third of the major social platforms to announce policies intended to rein in QAnon, a conspiracy theory they all helped spread.
Twitter announced in July a crackdown on QAnon, though it did not ban its supporters from its platform. It did ban thousands of accounts associated with QAnon content and blocked URLs associated with it from being shared. Twitter also said that it would stop highlighting and recommending tweets associated with QAnon.  
Facebook, meanwhile, announced last week that it was banning groups that openly support QAnon. It said it would remove pages, groups and Instagram accounts for representing QAnon — even if they don’t promote violence.  
The social network said it will consider a variety of factors in deciding whether a group meets its criteria for a ban. Those include the group’s name, its biography or “about” section, and discussions within the page or group on Facebook, or account on Instagram, which is owned by Facebook.
Facebook’s move came two months after it announced a softer crackdown, saying it would stop promoting the group and its adherents. But that effort faltered due to spotty enforcement.
YouTube said it had already removed tens of thousands of QAnon videos and eliminated hundreds of channels under its existing policies — especially those that explicitly threaten violence or deny the existence of major violent events.
“All of this work has been pivotal in curbing the reach of harmful conspiracies, but there’s even more we can do to address certain conspiracy theories that are used to justify real-world violence, like QAnon,” the company said in Thursday’s blog post.
Experts said the move shows that YouTube is taking threats around violent conspiracy theories seriously and recognizes the importance of limiting the spread of such conspiracies. But, with QAnon increasingly creeping into mainstream politics and U.S. life, they wonder if it is too late.  
“While this is an important change, for almost three years YouTube was a primary site for the spread of QAnon,” said Sophie Bjork-James, an anthropologist at Vanderbilt University who studies QAnon. “Without the platform Q would likely remain an obscure conspiracy. For years YouTube provided this radical group an international audience.”

In Blocking Tweets, Is Twitter Protecting the Election or Interfering?

The decision by Twitter to block the dissemination of a story on its site about Hunter Biden, the son of former Vice President Joe Biden, has added to an already heated discussion in the U.S. about whether internet companies have too much power and are making decisions that could affect the U.S. elections.

Some have applauded Twitter’s move as a stand against misinformation. Others have criticized Twitter’s decision as biased, curtailing speech in a way that could affect the outcome of the U.S. election.

In recent weeks, Twitter, Facebook and Google, the owner of YouTube, have increasingly taken steps to restrict the spread of what they describe as misinformation and extremist speech on their sites. After the 2016 U.S. election, internet companies were criticized for not doing enough to stop misinformation on their services.

This week, Twitter blocked certain accounts on its site as they tried to share a story by the New York Post that cited supposed email exchanges between Hunter Biden and a Ukrainian official about setting up a meeting with Hunter Biden’s father when Joe Biden was the U.S. vice president. The story claimed to rely on records from a computer drive that was allegedly abandoned by Hunter Biden. Rudy Giuliani, lawyer to President Donald Trump, reportedly gave the drive to the Post.

No meeting, campaign says

The Biden campaign said it had “reviewed Joe Biden’s official schedules from the time and no meeting, as alleged by the New York Post, ever took place.”

“Investigations by the press, during impeachment, and even by two Republican-led Senate committees whose work was decried as ‘not legitimate’ and political by a GOP colleague, have all reached the same conclusion: that Joe Biden carried out official U.S. policy toward Ukraine and engaged in no wrongdoing,” said Andrew Bates, a spokesman for Biden.

No tweeting, no sharing

Citing the firm’s hacked-materials policy, Twitter blocked the Post’s ability to tweet about the story from its Twitter account. It also blocked the Trump campaign and other accounts from sharing the story. Facebook said it reduced the reach of the post, pending fact checking from third-party fact-checkers.

For Lisa Kaplan, chief executive of the Alethea Group, which tracks misinformation and online threats, Twitter’s recent decisions to block some posts are a good sign.

“I do applaud Twitter’s efforts and the stances they have taken to address disinformation, making it so that people can’t share a link known to be false that could have potential implications on the election,” she said. “It’s an important step if they are truly going to be a source of accurate information for their users.”

GOP responds

The reaction from Republicans over the Post story has been swift. Senate Republicans said Thursday that they would subpoena Jack Dorsey, CEO of Twitter, to testify next week.
Dorsey should “explain why Twitter is abusing their corporate power to silence the press,” said Senator Ted Cruz, a Texas Republican. Senator Josh Hawley, a Missouri Republican, said he had sent a letter to Dorsey and Mark Zuckerberg, CEO of Facebook, asking them to testify at a committee hearing.

The companies’ decision about the Post stories throws fuel on an issue that has gained traction over the past year: whether the companies are publishers, making editorial decisions, or “platforms,” places where people share information but with the companies providing little oversight of what’s said.

Protections weighed

Congressional leaders of both parties are considering whether to strip the companies of some of their legal protections that say they aren’t responsible for the speech on their sites. On Thursday, Republican Ajit Pai, chairman of the U.S. Federal Communications Commission, said the agency would consider weakening the legal protections the companies enjoy. Some Democrats as well have called for stripping the internet firms of some of their legal protections.

With the decision about the Post story, Ken Paulson, director of the Free Speech Center at Middle Tennessee State University, says the internet firms have not moved closer to being publishers.

“If you have a business and the last thing you want is untruthful stories, then you can say, ‘We’re uncomfortable to share this with millions of people globally.’ That’s your right,” Paulson said. “I don’t think we want to mistake Facebook or Twitter for a public utility. And I don’t think a simple ban on content you believe to be unreliable and fraudulent makes you a publisher.

“A company has a right to decide what it stands for, and that’s where we are now with Twitter and Facebook,” he said.

One thing is certain: With the internet firms making decisions almost daily about curtailing or blocking posts, lawmakers and regulators will have more fodder to point to for changing the rules.

Republicans to Subpoena Twitter CEO Over Blocking Article Attacking Biden 

Senate Republicans said Thursday they will subpoena Twitter chief executive Jack Dorsey over the decision to block a news report critical of Democratic presidential candidate Joe Biden.

“This is election interference and we’re 19 days out from an election,” Senator Ted Cruz said, a day after the social network blocked links to the article by the New York Post alleging corruption by Biden in Ukraine. Cruz said the Senate Judiciary Committee would vote next Tuesday to subpoena Dorsey to testify at the end of next week and “explain why Twitter is abusing their corporate power to silence the press.”

“The Senate Judiciary Committee wants to know what the hell is going on,” he said. “Twitter and Facebook and big tech millionaires don’t get to censor political speech and actively interfere in the election. That’s what they are doing right now.”

Republican Senator Josh Hawley announced separately that he had sent letters to Dorsey and Facebook chief executive Mark Zuckerberg asking them to appear before his Judiciary Subcommittee on Crime and Terrorism. The hearing will “consider potential campaign law violations” in support of Biden in connection with the blocking of the article.

The Post’s story purported to expose corrupt dealings by Biden and his son Hunter Biden in Ukraine. The newspaper claimed that the former vice president, who was in charge of U.S. policy toward Ukraine, took actions to help his son, who in 2014-2017 sat on the board of controversial Ukraine energy company Burisma.

But the newspaper’s source for the information raised questions. It cited records on a drive allegedly copied from a computer said to have been abandoned by Hunter Biden, which Trump lawyer Rudy Giuliani gave to the Post. The report also made claims about Joe Biden’s actions in Ukraine, which were contrary to the record.

Wary of “fake news” campaigns, both Facebook and Twitter said they took action out of caution over the article and its sourcing. “This is part of our standard process to reduce the spread of misinformation,” said Facebook spokesman Andy Stone. The role of Giuliani, who has repeatedly advanced unproven and poorly sourced conspiracy theories about the Bidens and Ukraine, also raised flags.

The Biden campaign rejected the assertions of corruption in the report but has not denied the veracity of the underlying materials, mostly emails between Hunter Biden and business partners.

Trump, who trails Biden in polls 19 days before the presidential election, blasted the two social media giants on Wednesday. “So terrible that Facebook and Twitter took down the story of ‘Smoking Gun’ emails related to Sleepy Joe Biden and his son, Hunter, in the @NYPost,” Trump posted on Twitter.

Report Tracks How Governments Fighting COVID Are Increasing Surveillance

Governments around the world have used the COVID-19 pandemic as their reason for expanding digital surveillance and collecting more data from their citizens, according to a report published Wednesday.

The report again singled out China for specific criticism as the world’s worst abuser of internet freedom, saying Beijing also found new methods of digital surveillance during the pandemic. It noted that Chinese authorities combined low- and high-tech tools not only to manage the outbreak of the coronavirus, but also to deter internet users from sharing information from independent sources and challenging the official narrative.

The report concluded that “the pandemic is normalizing the sort of digital authoritarianism that the Chinese Communist Party has long sought to mainstream.”

“China’s government already was sitting on the most sophisticated and multilayered censorship and internet control apparatus around the world,” said Sarah Cook, a senior researcher at Freedom House.

Technology spreads

She added that what is unusual this year with COVID-19 is that these tactics were being used for public health. Surveillance technology developed in the Xinjiang region — such as handheld devices for pulling data from citizens’ phones — is now proliferating in other parts of the country.

There also have been upgrades to these surveillance technologies, such as refining facial recognition to identify people who are wearing masks, and forcing people in China to use various color-coded health apps that track citizens’ infections. “These really don’t protect privacy and there are research initiatives that indicated that they even had a backdoor to the police,” Cook continued.

In addition, Freedom House researchers say individuals around China also have reported pandemic-related intrusions, like being told to put webcams inside their houses and outside their doors for alleged quarantine enforcement.

Apart from the heavy surveillance, Cook said the spread of COVID-19 is directly related to Chinese Communist Party speech controls on the internet.

WeChat users
 
“The very thing we flagged last year as a problem in terms of monitoring of WeChat users and reprisals against WeChat users is exactly what happened to doctors like Li Wenliang, who initially tried to share information about this emerging SARS-like virus,” she said. “So, I think there’s really this very intimate connection between the outbreak overall and the fact that China is the worst abuser of internet freedom around the world.”

Elsewhere in the world, Iceland is said to have the greatest internet freedom, followed by Estonia and Canada. The report listed the U.S. in seventh place, with internet freedom worsening for the fourth year running.

Apple Unveils New iPhones for Faster 5G Wireless Networks

Apple unveiled four new iPhones equipped with technology for use with faster new 5G wireless networks, hoping the promise of higher data speeds will spark demand for new phones.
That might not happen as quickly as Apple would like.
In a virtual presentation Tuesday, the company announced four 5G-enabled versions of the new iPhone 12 ranging in price from almost $700 to roughly $1,100. Apple also announced a new, less expensive version of its HomePod smart speaker.
Smartphone sales have been slowing for years as their technology has matured. That has meant far fewer gotta-have-it innovations that can drive demand and, at least until recently, increasingly pricey phones. Add to that the pandemic-related economic crisis, and consumers have tended to eke as much life as possible out of their existing phones.
Apple, however, is clearly betting that 5G speeds could push many users off the fence. At its event, the company boasted about 5G capabilities and brought in Verizon CEO Hans Vestberg to champion the carrier’s network.
5G is supposed to mean much faster speeds, making it quicker to download movies or games, for instance. But finding those speeds can be a challenge. While telecom operators have been rolling out 5G networks, significant boosts in speed are still uncommon in much of the world, including the U.S. So far, there are no popular new consumer applications that require 5G.
Updates in the new phones mostly amount to “incremental improvements” over predecessor iPhones, technology analyst Patrick Moorhead said, referring to 5G capabilities and camera upgrades on higher-end phones. But he suggested that if carriers build out their 5G networks fast enough, it could launch a “supercycle” in which large numbers of people switch to 5G phones.
That might be a big if. Mobile expert Carolina Milanesi of the firm Creative Strategies said economic pain caused by the global pandemic and accompanying job losses could easily restrain that buying impulse.
Apple’s new models include the iPhone 12, which features a 6.1-inch display and starts at almost $800, and the iPhone 12 Mini, with a 5.4-inch display at almost $700. A higher-end iPhone 12 Pro with more powerful cameras will begin at roughly $1,000; the 12 Pro Max, with a 6.7-inch display, will set buyers back at least $1,100. Apple said the phones should be more durable; the iPhone 12 Pro and 12 Pro Max feature a new flat-edge stainless steel design and a Ceramic Shield front cover for increased durability.
In a move that may annoy some consumers, Apple will no longer include charging adapters with new phones. It says that will mean smaller, lighter boxes that are more environmentally friendly to ship. Apple, however, separately sells power adapters that cost about $20 and $50, depending on how fast they charge phones.
The iPhone models unveiled Tuesday will launch at different times. The iPhone 12 and 12 Pro will be available starting Oct. 23; the Mini and the Pro Max will follow on Nov. 13.
That compresses Apple’s window for building up excitement heading into the key holiday season.
Although other parts of Apple’s business are now growing more rapidly, the iPhone remains the biggest business of a technology juggernaut currently worth about $2 trillion, nearly double its value when stay-at-home orders imposed in the U.S. in mid-March plunged the economy into a deep recession.
The pandemic temporarily paralyzed Apple’s overseas factories and key suppliers, leading to a delay of the latest iPhones from their usual late September rollout. The company also closed many of its U.S. stores for months because of the pandemic, depriving Apple of a prime showcase for its products.
Apple on Tuesday also said it was shrinking the size and price of its HomePod speaker to catch up to Amazon and Google in the market for internet-connected speakers, where it has barely made a dent. Both Amazon and Google are trying to position their speakers, the Echo and the Nest, as low-cost command centers for helping people manage their homes and lives. They cost as little as $50, while the HomePod costs almost $300.
The new HomePod Mini will cost almost $100. It will integrate Apple’s own music service, of course, with Pandora and Amazon’s music service in “coming months.” Apple didn’t mention music-streaming giant Spotify. It will be available for sale Nov. 6 and start shipping the week of Nov. 16.
The research firm eMarketer estimates about 58 million people in the U.S. use an Amazon Echo, while 26.5 million use a Google Nest speaker. Roughly 15 million use a HomePod or speakers sold by other manufacturers, including Sonos and Harman Kardon.

Anti-Migrant Sentiment Fanned on Facebook in Malaysia

As coronavirus infections surged in Malaysia this year, a wave of hate speech and misinformation aimed at Rohingya Muslim refugees from Myanmar began appearing on Facebook.

Alarmed rights groups reported the material to Facebook. But six months later, many posts targeting the Rohingya in Malaysia remain on the platform, including pages such as “Anti Rohingya Club” and “Foreigners Mar Malaysia’s Image,” although those two pages were removed after Reuters flagged them to Facebook recently. Comments still online in one private group with nearly 100,000 members included “Hope they all die, this cursed pig ethnic group.”

Facebook acknowledged in 2018 that its platform was used to incite violence against the Rohingya in Myanmar, and last year spent more than $3.7 billion on safety and security on its platform. But the surge of anti-Rohingya comment in Malaysia shows how xenophobic speech nonetheless persists.

“Assertions that Facebook is uncommitted to addressing safety and security are inaccurate and do not reflect the significant investment we’ve made to address harmful content on our platform,” a company spokeswoman told Reuters.

Reuters found more than three dozen pages and groups, including accounts run by former and serving Malaysian security officials, that featured discriminatory language about Rohingya refugees and undocumented migrants. Dozens of comments encouraged violence.

Reuters found some of the strongest comments in closed private groups, which people have to ask to join. Such groups have been a hotbed for hate speech and misinformation in other parts of the world. Facebook removed 12 of the 36 pages and groups flagged by Reuters, and several posts. Five other pages with anti-migrant content seen by Reuters in the last month were removed before Reuters queries.

“We do not allow people to post hate speech or threats of violence on Facebook and we will remove this content as soon as we become aware of it,” Facebook said.

Some of the pages that remain online contain comments comparing Rohingya to dogs and parasites. Some disclosed where Rohingya had been spotted and encouraged authorities and the public to take action against them.

Widespread hate speech

“This kind of hate speech can lead to physical violence and persecution of a whole group. We saw this in Myanmar,” said John Quinley, senior human rights specialist at Fortify Rights, an independent group focused on Southeast Asia. “It would be irresponsible to not actively take down anti-refugee and anti-Rohingya Facebook groups and pages.”

Muslim-majority Malaysia was long friendly to the Rohingya, a minority fleeing persecution in largely Buddhist Myanmar, and more than 100,000 Rohingya refugees live in Malaysia, even though it doesn’t officially recognize them as refugees. But sentiment turned in April, with the Rohingya being accused of spreading the coronavirus. Hate speech circulated widely, including on Facebook – a platform used by nearly 70% of Malaysia’s 32 million people.

Rights groups and refugees said comments on Facebook helped escalate xenophobia in Malaysia. “Malaysians who have lived with Rohingya refugees for years have started calling the cops on us, some have lost jobs. We are in fear all the time,” said Abu, a Rohingya refugee who did not want to give his full name fearing repercussions.
Another refugee who declined to be identified said he deactivated his Facebook account after his details were posted and Malaysians messaged him telling him to go back to Myanmar – from where he fled five years ago. “Facebook has failed, they don’t understand how dangerous such comments can be,” he said, referring to posts he had seen supporting action in Myanmar against Rohingya.

‘Absent’

Rights groups said the government of Prime Minister Muhyiddin Yassin had failed to do enough to curb xenophobia as it rounded up thousands of undocumented migrants and said it would no longer accept Rohingya refugees. “The Malaysian government was completely absent from any sort of effort to try to curtail this wave of hate speech,” said Human Rights Watch deputy Asia director Phil Robertson. Muhyiddin’s office did not respond to requests for comment.

Reuters found four pages with links to security and enforcement agencies voicing anti-immigrant sentiment. “Let us not suffer the cancer of this ethnic (group),” administrators of a group called “Friends of Immigration” posted. The group says it is run by current and former immigration officials. That post from April was removed this month after Reuters queries to Facebook. The immigration department did not respond to Reuters queries. The communications and home ministries also did not respond to queries on hate speech in social media.

Among the earliest posts to draw comments calling for Rohingya to be shot was one from the Malaysian Armed Forces Headquarters asking the public to be its “ears and eyes” and report undocumented migrants. A military spokesman confirmed the authenticity of the page.

Another post that was shared more than 26,000 times was from a page calling itself the Military Royal Intelligence Corps that said undocumented migrants “will bring problems to all of us.” Reuters was unable to contact the administrator of the page. The military said it had nothing to do with the page and that it was run by a former member of the intelligence unit.

Facebook removed both posts after Reuters queries. The Intelligence Corps page was also taken down.

App Allowing Chinese Citizens Access to Global Internet Quickly Disappears

A mobile app launched last week in China that many there hoped would allow access to long-banned Western social media sites abruptly disappeared from Chinese app stores a day after its unveiling.

Tuber, an Android app backed by Chinese cybersecurity software giant Qihoo 360, first appeared to be officially available last Friday. It offered Chinese citizens limited access to websites such as YouTube, Facebook and Google, and it recorded some 5 million downloads following its debut.

Yet a day later, the Tuber app disappeared from mobile app stores, including one run by Huawei Technologies Co. A search for the app’s website yielded no results when VOA checked Monday. It’s unclear whether the government ordered the takedown of the app.

Experts told VOA that such ventures are sometimes designed to create the illusion of choice for users eager to gain access to the global internet, but these circumvention tools are sometimes deleted if they are deemed by the Chinese government to be too popular with consumers.

Short-lived frenzy

Chinese users hailed their newfound ability to visit long-banned websites before the app was removed last Saturday. Several now-banned articles introducing Tuber went viral Friday on China’s super app WeChat and seem to have contributed to Tuber’s overnight success.

Sporting a logo similar to that of YouTube, Tuber’s main page offered a feed of YouTube videos, while another tab allowed users to go to Western websites banned in China.

A reporter at Chinese state media outlet Global Times tweeted that the move is “good for China’s stability and it’s a great step for China’s opening up.”

“Exciting news!! #China launched a new web browser Tuber that can connect to FB, Twitter, Google, etc, without using VPN!! It’s still censoring fake news or propaganda like Epoch Times, but I think it’s good for China’s stability and it’s a great step for China’s opening up!” Rita Bai Yunyi (@RitaBai) tweeted on October 9, 2020.

Users noticed, though, that the browser came with its own censorship already included. References to sensitive political issues, such as the 1989 Tiananmen Square crackdown and the more recent Hong Kong protests, were omitted, according to a Reuters check. YouTube queries for politically sensitive keywords such as “Tiananmen” and “Xi Jinping” returned no results on the app, according to TechCrunch.

Some terms in the users’ agreement also raised concerns among observers.
According to the app’s terms of service, the platform could suspend users’ accounts and share their data “with the relevant authorities” if they “actively browse or disseminate” content that breaches the constitution, endangers national security and sovereignty, spreads rumors, disrupts social order or violates other local laws. Additionally, the terms of service stated that the collection of personal information about users related to national security, public safety and public health does not require user authorization.

Meanwhile, users of the app had to register through a Chinese phone number, which is tied to a person’s real identity and allows GPS location tracking.

Since its removal last Saturday, those who downloaded the app have received a message that Tuber is “undergoing a system upgrade,” according to TechCrunch.

Not the first attempt

Sarah Cook, a senior research analyst for China, Hong Kong and Taiwan at Freedom House, a watchdog organization, told VOA the brief availability of the new app might be a way for the Chinese government to create the “illusion of choice” for users who want to use the global internet, especially for communications that are not sensitive.

“By facilitating and controlling the access, the Chinese Communist Party is able to ensure that their browsing indeed stays within approved limits,” she said.

Cook added that, by contrast, when a Chinese internet user jumps the Great Firewall with an independent VPN, even if they were looking for entertainment content, they are likely to come across more politically sensitive information.

Tuber is not the first browser in China that attempted to provide Chinese citizens with some access to the Western internet, although few have drawn as much attention. About a year ago, a similar effort was made with a mobile browser called Kuniao that was approved by China’s Ministry of Industry and Information Technology. It purported to allow users to bypass internet censorship, though critics also suggested that it simply reduced the scope of censorship rather than allowing people to fully circumvent controls.

“But within two days of its launch, Kuniao’s website crashed from the high demand and it was soon blocked entirely. The official position on it seemed to sour quickly and online references to the browser were also deleted,” Cook said.

A Chinese blogger who has been following China’s Great Firewall and who requested anonymity for fear of government retaliation told VOA the latest moves are telling. The blogger said the fate of both Tuber and Kuniao shows the government is increasingly unable to control sophisticated circumvention tools, including commercially available VPN (virtual private network) services and tools developed by tech-savvy amateurs.

“The government has actually allowed a considerable number of these web browsers,” the blogger said. “It helps the government to achieve some level of monitoring over these internet users compared to those who use VPN services.

“These browsers are remarkably reliable when used within limited groups. But when they’ve become overly popular, the government will inevitably intervene,” the blogger said, adding it wouldn’t be surprising to see similar circumvention tools coming out soon.

Microsoft Attempts Takedown of Global Criminal Botnet

Microsoft announced legal action Monday seeking to disrupt a major cybercrime digital network that uses more than 1 million zombie computers to loot bank accounts and spread ransomware, which experts consider a major threat to the U.S. presidential election.

The operation to knock offline command-and-control servers for a global botnet that uses an infrastructure known as Trickbot to infect computers with malware was initiated with an order that Microsoft obtained in Virginia federal court on Oct. 6. Microsoft argued that the crime network is abusing its trademark.

“It is very hard to tell how effective it will be, but we are confident it will have a very long-lasting effect,” said Jean-Ian Boutin, head of threat research at ESET, one of several cybersecurity firms that partnered with Microsoft to map the command-and-control servers. “We’re sure that they are going to notice and it will be hard for them to get back to the state that the botnet was in.”

Cybersecurity experts said that Microsoft’s use of a U.S. court order to persuade internet providers to take down the botnet servers is laudable. But they add that it’s not apt to be successful because too many won’t comply and because Trickbot’s operators have a decentralized fall-back system and employ encrypted routing.

Paul Vixie of Farsight Security said via email, “experience tells me it won’t scale — there are too many IP’s behind uncooperative national borders.” And the cybersecurity firm Intel 471 reported no significant hit on Trickbot operations Monday and predicted “little medium- to long-term impact” in a report shared with The Associated Press.

But ransomware expert Brett Callow of the cybersecurity firm Emsisoft said that a temporary Trickbot disruption could, at least during the election, limit attacks and prevent the activation of ransomware on systems already infected.

The announcement follows a Washington Post report Friday of a major — but ultimately unsuccessful — effort by the U.S. military’s Cyber Command to dismantle Trickbot beginning last month with direct attacks rather than asking providers to deny hosting to domains used by command-and-control servers. A U.S. policy called “persistent engagement” authorizes U.S. cyberwarriors to engage hostile hackers in cyberspace and disrupt their operations with code, something Cybercom did against Russian misinformation jockeys during U.S. midterm elections in 2018.

Created in 2016 and used by a loose consortium of Russian-speaking cybercriminals, Trickbot is a digital superstructure for sowing malware in the computers of unwitting individuals and websites. In recent months, its operators have been increasingly renting it out to other criminals who have used it to sow ransomware, which encrypts data on target networks, crippling them until the victims pay up.

One of the biggest reported victims of a ransomware variety sowed by Trickbot called Ryuk was the hospital chain Universal Health Services, which said all 250 of its U.S. facilities were hobbled in an attack last month that forced doctors and nurses to resort to paper and pencil.

U.S. Department of Homeland Security officials list ransomware as a major threat to the Nov. 3 presidential election. They fear an attack could freeze up state or local voter registration systems, disrupting voting, or knock out result-reporting websites.
While cybersecurity experts say the operators of Trickbot and affiliated digital crime syndicates are Russian speakers mostly based in eastern Europe, they caution that they are motivated by profit, not politics. They do, however, operate with impunity with no Kremlin interference as long as their targets are abroad.

“In today’s world, Trickbot is a type of a plague,” said Alex Holden, founder of Milwaukee-based Hold Security, which tracks its activity closely on the dark web, “and a government that ignores a global plague is more than complacent.”

Trickbot is “malware-as-a-service”; its modular architecture lets it be used as a delivery mechanism for a wide array of criminal activity. It began mostly as a so-called banking Trojan that attempts to steal credentials from online bank accounts so criminals can fraudulently transfer cash. But recently, researchers have noted a rise in Trickbot’s use in ransomware attacks targeting everything from municipal and state governments to school districts and hospitals. Ryuk and another type of ransomware called Conti — also distributed via Trickbot — dominated attacks on the U.S. public sector in September, said Callow of Emsisoft.

Holden said the reported Cybercom disruption — involving efforts to confuse its configuration through code injections — succeeded in temporarily breaking down communications between command-and-control servers and most of the bots. “But that’s hardly a decisive victory,” he said, adding that the botnet rebounded with new victims and ransomware.

The disruption — in two waves that began Sept. 22 — was first reported by cybersecurity journalist Brian Krebs. The AP could not immediately confirm the reported Cybercom involvement.

Facebook to Ban Content that Denies, Distorts Holocaust

Facebook announced Monday that it is updating its hate speech policy and will ban all posts that deny or distort the Jewish Holocaust.

“We’ve long taken down posts that praise hate crimes or mass murder, including the Holocaust,” Chief Executive Mark Zuckerberg said in a Facebook post. Zuckerberg said that with rising anti-Semitism, the company was expanding its policy to prohibit such content. He added, “If people search for the Holocaust on Facebook, we’ll start directing you to authoritative sources to get accurate information.”

The announcement follows a #NoDenyingIt campaign by the Conference on Jewish Material Claims Against Germany.

The non-profit Anti-Defamation League said on Facebook that it was pleased the social media giant “has finally taken the step we have been asking for nearly a decade: Remove Holocaust denial from their platform.” The ADL also said, “They now need to be transparent and document the steps being taken to keep this hate off the platform.”

World Jewish Congress President Ronald S. Lauder also welcomed the move, saying that by taking this “critical step,” Facebook is showing it recognizes Holocaust denial for what it truly is – a form of anti-Semitism “and therefore hate speech.”

The American Jewish Committee made similar comments, with its CEO, David Harris, calling the decision “profoundly significant.” He said, “There shouldn’t be a sliver of doubt about what the Nazi German regime did, nor should such a mega-platform as Facebook be used by antisemites to peddle their grotesque manipulation of history.”

An estimated six million Jews died in the Holocaust.

Zuckerberg, who is Jewish, came under fire in 2018 for saying in an interview that while he found Holocaust-denying content deeply offensive, he did not think it should be deleted. “I’ve struggled with the tension between standing for free expression and the harm caused by minimizing or denying the horror of the Holocaust. My own thinking has evolved as I’ve seen data showing an increase in anti-Semitic violence, as have our wider policies on hate speech,” Zuckerberg wrote Monday.

The move is the latest in a series of measures taken by Facebook to delete or ban offensive or false information, particularly ahead of the November 3 presidential election in the United States.

Chinese 5G Not Living Up to Its Hype

Mounted on rooftops, utility poles and streetlights throughout China since last year are hundreds of thousands of high-tech wireless towers for 5G, a powerful sign of the country’s ambition to lead in new technology. Yet many of them are operational for only half the day.

China Unicom, one of three telecommunication operators, announced in August that its Luoyang branch in Henan province would automatically switch its 5G transmitter stations to sleep mode from 9 p.m. to 9 a.m. because there were few people using them. The other two carriers quickly followed suit and since then have rolled out the same policies in other cities across the country.

“Shutting down base stations is not a manual shutdown, but an automatic adjustment made at a certain time,” Wang Xiaochu, chairman of China Unicom, said at the company’s midyear earnings conference.

5G is one of the biggest technology investments in China’s recent history. Touted as the next big leap forward in digital communication, the 5th-generation mobile network technology is supposed to change the world and spur a new digital revolution.

China officially launched its commercial 5G networks in September 2019 with the promise of delivering unprecedented digital speed to support new applications from autonomous driving to virtual surgery. More than a year later, the biggest 5G market is now facing widespread complaints about network speed and skyrocketing costs of deployments.

Signals are hitting walls

To handle more data at higher speeds, 5G uses higher frequencies than current networks. However, the signals travel shorter distances and encounter more interference.

“5G uses ultra-high frequency signals, which are about two to three times higher than the existing 4G signal frequency, so the signal coverage will be limited,” Wang Xiaofei, a communication expert at Tianjin University, told Xinhua, the official state-run press agency, last year as the country’s state telecoms started to make 5G networks available to the public.

Wang said that since the coverage radius of a 5G base station is only about 100 to 300 meters, China must build a station every 200 to 300 meters in urban areas. Because the penetration of 5G signals is so weak, even indoor stations will have to be built in densely distributed office buildings, residential areas and commercial districts.

To reach the same coverage that 4G currently has, the carriers eventually need to install as many as 10 million stations across the country, according to a report by Xinhua. “For the next three years starting this year, 1 million 5G base stations may need to be built every year,” Xiang Ligang, director-general of the Information Consumption Alliance, a telecom industry association, told state media last year.

In the first half of this year, China built only 257,000 new 5G base stations. The total number of stations installed across China was only about 410,000 by the end of June, according to the Ministry of Industry and Information Technology (MIIT).

Big costs, small benefits?

The cost of the energy needed to power 5G has proved to be one of the biggest headaches for Chinese telecommunication companies.

“The 5G base station equipment consumes about three times more energy than 4G because of the way the technology works,” Soumya Sen, associate professor of information and decision sciences at the University of Minnesota, told VOA in an email.
“5G uses multiple antennas to make use of reflected signals from buildings to provide gains in channel robustness and throughput.”

If 5G is to reach the same level of coverage as 4G networks, the base stations’ annual electricity bill will approach $29 billion, according to a report by the China Post and Telecommunications News, a media outlet directly under MIIT. That amount represents about 10 times the 2019 profit of China Telecom, one of the three state-owned telecommunication companies in China.

In the early days, there were efforts to make 5G more power-efficient than its predecessors, but those ambitions were quickly dashed as realities settled in. Two months after the official rollout of 5G services, a top executive from a Chinese carrier admitted that operators had made little progress in reducing 5G power consumption and cost. Speaking at a GSMA (Groupe Speciale Mobile Association) seminar in Beijing last week, Li Zhengmao, executive vice president of China Mobile, called on the government to subsidize electricity costs for telecoms.

“This might require government to support extended periods for subsidized monthly fees or subsidized handsets at the B2C [business to consumer] level, or tax breaks and other incentives,” said Ross Feingold, a lawyer and political risk analyst.

The total investment could top $220 billion in the next few years, Li Yizhong, a former minister of industry and information technology, said at a forum early this year. Another former official warned that China’s 5G push could become a failed investment.

“The existing 5G technology is very immature, hundreds of billions of investment have been deployed, and the operating cost is extremely high, no application scenarios can be found, and it is difficult to digest the cost in the future,” former finance minister Lou Jiwei reportedly said in a speech last month.

“It is difficult for ordinary consumers and industry users to see the long-term benefits and rewards of 5G,” said a white paper titled “The 2020 China 5G Economic Report,” released by the China Academy of Information and Communications Technology.

In a recent survey of Chinese consumers, 73.3% of the people polled said they believe there is no need for the public to buy 5G mobile phones. The study, released last month by iiMedia, a market research group, also found that the main reason given for not buying a 5G phone was simply that consumers saw no need for one.

With all the expectations and the investment, 5G is “actually exaggerated,” and it is not something that societies need anyway, according to the man who leads a company that dominates the technology. “In fact, human societies do not have an urgent need for 5G,” said Huawei’s founder and CEO, Ren Zhengfei. “What people need now is broadband, and the main content of 5G is not broadband.”
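For readers curious how a figure of that size arises, the arithmetic is straightforward. The sketch below is a rough illustration rather than a reported calculation: the 10 million station count is the figure cited above, while the per-station power draw and the electricity price are assumptions chosen only to show the order of magnitude.

```java
// A rough, illustrative estimate of the annual 5G electricity bill discussed above.
// The station count comes from the article; the power draw per station and the
// electricity price are assumptions made for this sketch, not reported figures.
public class FiveGPowerEstimate {
    public static void main(String[] args) {
        double stations = 10_000_000;     // ~10 million stations for 4G-level coverage (cited above)
        double kwPerStation = 3.5;        // assumed average draw, roughly 3x a typical 4G site
        double hoursPerYear = 24 * 365;   // continuous operation, ignoring overnight sleep mode
        double usdPerKwh = 0.095;         // assumed average electricity price in U.S. dollars

        double annualKwh = stations * kwPerStation * hoursPerYear;
        double annualCostUsd = annualKwh * usdPerKwh;

        System.out.printf("Annual consumption: about %.0f TWh%n", annualKwh / 1e9);
        System.out.printf("Annual electricity cost: about $%.0f billion%n", annualCostUsd / 1e9);
    }
}
```

With those assumed inputs the total lands near the $29 billion estimate attributed above to the China Post and Telecommunications News; different assumptions would shift the result, which is why the figure should be read as an order of magnitude rather than a precise forecast.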
 

Twitter Imposes Restrictions, More Warning Labels Ahead of US Election

Twitter Inc. said Friday that it would remove tweets calling for people to interfere with the U.S. election process or implementation of election results, including through violence, as the company also announced more restrictions to slow the spread of misinformation.

Twitter said in a blog post that, from next week, users will get a prompt pointing them to credible information before they can retweet content that has been labeled as misleading. It said it would add more warnings and restrictions on tweets with misleading information labels from U.S. political figures like candidates and campaigns, as well as U.S.-based accounts with more than 100,000 followers or that get “significant engagement.”

Twitter, which recently told Reuters it was testing how to make its labeling more obvious and direct, said people will have to tap through warnings to see these tweets. Users can also only “quote tweet” this content; likes, retweets and replies will be turned off.

Twitter says it has labeled thousands of misleading posts, though most attention has been on the labels applied to tweets by U.S. President Donald Trump. Twitter also said it would label tweets that falsely claim a win for any candidate.

Temporary steps

The company announced several temporary steps to slow amplification of content. For example, from Oct. 20 to at least the end of the U.S. election week, global users pressing “retweet” will be directed first to the “quote tweet” button to encourage people to add their own commentary. It will also stop surfacing trending topics without added context and will stop people seeing “liked by” recommendations from people they do not know in their timeline.

Twitter’s decision to hit the brakes on automated recommendations contrasts with the approach at Facebook Inc., which is amping up promotion of its groups product despite concerns about extremism in those spaces.

Social media companies are under pressure to combat election-related misinformation and prepare for the possibility of violence or polling place intimidation around the Nov. 3 vote. Reuters has reported that Republicans are mobilizing thousands of volunteers to watch early voting sites and ballot drop boxes to try to find evidence to back up Trump’s unsubstantiated complaints about widespread voter fraud. On Wednesday, Facebook said it would ban calls for poll watching using “militarized language.”

Pakistan Blocks TikTok, Citing ‘Immoral’ Content

Pakistan has blocked online short-video sharing platform TikTok on the grounds of “immoral/indecent” content for viewing in the majority-Muslim nation.

The state regulator said Friday that it had repeatedly instructed the platform to tighten its content monitoring to block access to the “unlawful” material. “However, the application failed to fully comply with the instructions, therefore, directions were issued for blocking of TikTok application in the country,” said the Pakistan Telecommunication Authority, PTA.

The regulator defended the decision, saying the PTA, in a formal warning, had given “considerable time” to the online platform to respond and comply with the instructions. “TikTok has been informed that the authority is open for engagement and will review its decision subject to a satisfactory mechanism by TikTok to moderate unlawful content,” according to the PTA.

There was no immediate reaction from the popular online platform to the blocking of its service by Pakistani authorities.

Amnesty International slammed the ban on TikTok, saying that in the name of a campaign against vulgarity, people are being denied the right to express themselves online. “The #TikTokBan comes against a backdrop where voices are muted on television, columns vanish from newspapers, websites are blocked and television ads banned,” Amnesty said in a statement posted on Twitter.

TikTok, owned by China-based ByteDance, is also under pressure globally due to security and privacy concerns. Neighboring India has already blocked access to the social media outlet, along with dozens of other apps developed by Chinese companies, citing cybersecurity concerns. TikTok is also under scrutiny in other countries, including the United States, the biggest market by revenue for the company.

Dating apps ban

Last month, Pakistan blocked access to five dating apps for their delivery of “immoral/indecent content” in violation of the country’s laws. The platforms include Tinder, Grindr, Tagged, Skout and SayHi. The PTA, without elaborating on the sweeping ban, said that all five companies had failed to respond to its directive within the stipulated time, though it did not specify the timeframe.

Tinder is globally popular and owned by Match Group. Grindr, which has a large following in the U.S., describes itself as a social network “for gay, bi, trans, and queer people.” Homosexuality and extra-marital relationships are outlawed in Pakistan.
 

Google, Oracle Meet in Copyright Clash at Supreme Court

Tech giants Google and Oracle are clashing at the Supreme Court in a copyright dispute that’s worth billions and important to the future of software development.
The case before the justices Wednesday has to do with Google’s creation of the Android operating system now used on the vast majority of smartphones worldwide. Google says that to create Android, which was released in 2007, it wrote millions of lines of new computer code. But it also used 11,330 lines of code, along with the way that code is organized, from Oracle’s Java platform.
 
Google has defended its actions, saying what it did is long-settled, common practice in the industry, a practice that has been good for technical progress. But Oracle says Google “committed an egregious act of plagiarism” and sued, seeking more than $8 billion.
The case has been going on for a decade. Google won the first round when a trial court rejected Oracle’s copyright claim, but that ruling was overturned on appeal. A jury then sided with Google, calling its copying “fair use,” but an appeals court disagreed.
Because of the death of Justice Ruth Bader Ginsburg, only eight justices are hearing the case, and they’re doing so by phone because of the coronavirus pandemic. The questions for the court are whether the 1976 Copyright Act protects what Google copied, and, even if it does, whether what Google did is still permitted.  
Oracle, for its part, says the case is simple.  
“This case is about theft,” Oracle’s chief Washington lobbyist, Ken Glueck, said in a telephone interview ahead of argument. He compared what Google did to plagiarizing from someone else’s speech. When you plagiarize one line from a speech, he said: “That’s a plagiarized speech. Nobody says, ‘Oh, well, it was just one line.'”
But Google’s Kent Walker, the company’s chief legal officer, said in an interview that Google wrote “every line of code we possibly could ourselves.”
“No one’s ever claimed copyright over software interfaces, but that’s what Oracle is claiming now,” Walker said.
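To make the dispute concrete, here is a minimal, hypothetical Java sketch of the distinction between "declaring code" (the software interface developers already know) and the implementing code behind it. The example is modeled loosely on java.lang.Math.max and is offered only as an illustration, not as a representation of the specific lines at issue in the case.

```java
// A minimal, hypothetical sketch of "declaring code" versus "implementing code."
// The declaration is the familiar interface that existing programmers call;
// the body beneath it is an independently written implementation.
public final class MiniMath {

    // Declaring code: the name, parameters and return type developers already
    // know (modeled loosely on java.lang.Math.max).
    public static int max(int a, int b) {
        // Implementing code: written from scratch; many different bodies
        // could sit behind the same declaration.
        return (a >= b) ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(max(3, 7)); // prints 7
    }
}
```

Google's position, as described above, is that reusing familiar declarations like this is long-settled industry practice that lets developers carry existing skills to a new platform; Oracle's position is that the declarations and their organization are creative work that copyright protects.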
Microsoft, IBM and major internet and tech industry lobbying groups have weighed in — in favor of Google.
The Trump administration, the Motion Picture Association and the Recording Industry Association of America are among those supporting Oracle.

Facebook Says It Will Ban Groups for ‘Representing’ QAnon

Facebook said it will ban groups that “represent” QAnon, the baseless conspiracy theory that paints President Donald Trump as a secret warrior against a supposed child-trafficking ring run by celebrities and “deep state” government officials. The company said Tuesday that it will remove Facebook pages, groups and Instagram accounts for “representing QAnon,” even if they don’t promote violence. The social network said it will consider a variety of factors to decide if a group meets its criteria for a ban, including its name, the biography or “about” section of the page, and discussions within the page, group or Instagram account. Mentions of QAnon in a group focused on a different subject won’t necessarily lead to a ban, Facebook said. Less than two months ago, Facebook said it would stop promoting the group and its adherents, although it faltered with spotty enforcement. It said it would only remove QAnon groups if they promote violence. That is no longer the case. The company said it started to enforce the policy Tuesday but cautioned that it “will take time and will continue in the coming days and weeks.” The QAnon phenomenon has sprawled across a patchwork of secret Facebook groups, Twitter accounts and YouTube videos in recent years. QAnon has been linked to real-world violence such as criminal reports of kidnapping and dangerous claims that the coronavirus is a hoax. But the conspiracy theory has also seeped into mainstream politics. Several Republicans running for Congress this year are QAnon-friendly. By the time Facebook and other social media companies began enforcing — however limited — policies against QAnon, critics said it was largely too late. Reddit, which began banning QAnon groups in 2018, was well ahead, and to date it has largely avoided having a notable QAnon presence on its platform. Twitter did not immediately respond to a message for comment on Tuesday. Also on Tuesday, Citigroup Inc. reportedly fired a manager in its technology department after an investigation found that he operated a prominent website dedicated to QAnon. According to Bloomberg, Jason Gelinas had been placed on paid leave after he was identified on Sept. 10 by a fact-checking site as the operator of the website QMap.pub and its associated mobile apps. Citi did not immediately respond to a message for comment on Tuesday. 

US Congressional Panel Finds Big Tech Abuses Power, Recommends Changes

A U.S. House of Representatives panel looking into abuses of market power by four of the biggest technology companies found they used “killer acquisitions” to smite rivals, charge exorbitant fees and force small businesses into “oppressive” contracts in the name of profit.

The panel, an antitrust subcommittee of the Judiciary Committee, recommended that Alphabet Inc.’s Google, Apple Inc., Amazon.com and Facebook should not both control and compete in related business activities, but stopped short of saying they should be broken up. The scathing 449-page report describes dozens of instances where the companies misused their power, revealing corporate cultures apparently bent on doing what they could to maintain dominance over large portions of the internet.

“To put it simply, companies that once were scrappy, underdog startups that challenged the status quo have become the kinds of monopolies we last saw in the era of oil barons and railroad tycoons,” the report said.

Facebook, Apple and Google did not have an immediate comment. In anticipation of the report, Amazon warned in a blog post Tuesday against “fringe notions of antitrust” and market interventions that “would kill off independent retailers and punish consumers by forcing small businesses out of popular online stores, raising prices and reducing consumer choice.”

After more than a year of investigation involving 1.3 million documents and more than 300 interviews, the committee, led by Democratic Congressman David Cicilline, found the companies were running marketplaces where they also competed, creating “a position that enables them to write one set of rules for others, while they play by another.”

Coming just weeks before the Nov. 3 presidential election, the content of the report became increasingly political, an opportunity for Republicans and Democrats to boost their credibility in the fight against market domination by big tech companies. That said, Congress is unlikely to act on the findings this year. Ultimately, the report reflects the views of the Democratic majority in the House, and two other reports were expected to be authored by Republican members on the panel, two sources told Reuters earlier in the day.

Recommendations

The panel recommended companies be prohibited from operating in closely aligned businesses. While the report did not name any one company, this recommendation suggests that Google, which runs the auctions for online ad space and also participates in those auctions, could be required to clearly separate, or not even operate, the two businesses.

The report urged Congress to allow antitrust enforcers more leeway in stopping companies from purchasing potential rivals, something that is now difficult. Facebook’s acquisition of Instagram in 2012 is an example. Instagram at the time was small and insignificant, but Facebook CEO Mark Zuckerberg saw its potential and noted that it was “building networks that are competitive with our own” and “could be very disruptive to us,” the report said.

As part of the report, the committee staff drew up a menu of potential changes in antitrust law. The suggestions ranged from the aggressive, such as potentially barring companies like Amazon.com from operating the markets in which they also compete, to the less controversial, like increasing the budgets of the agencies that enforce antitrust law: the Justice Department’s Antitrust Division and the Federal Trade Commission.

COVID-19 Stokes Demand for Temperature Check Technologies

Even though businesses are reopening around the world, the pandemic is still a reality. Many commercial spaces and offices are taking people’s temperatures before allowing them inside. In some industries, handheld thermometers may not be efficient enough. Thermal imaging systems allow temperatures to be taken without anyone needing to be physically close to the person being evaluated. The demand for these types of devices is skyrocketing globally. VOA’s Elizabeth Lee has the details.
Videographers: Michael Eckels, Elizabeth Lee. Video editor: Elizabeth Lee

US States Roll Out Apps Alerting People to COVID-19 Exposure

More than six months into the COVID-19 pandemic, a handful of U.S. states are starting to roll out apps that promise to tell people if they’ve been exposed to someone with the virus, without revealing personal information. Now, with the White House struggling with a COVID-19 outbreak, the goal of figuring out a way to quickly notify people has gained more urgency.

The arrival of these apps in the U.S. comes as communities are opening in fits and starts. The hope is that by using technology to notify people they’ve been exposed to the virus, the apps will enhance the ability of local health officials to stem the spread of COVID-19. It’s an idea being tested in real time. But will the apps make a difference?

“We don’t know yet,” said Jeffrey Kahn, director of the Johns Hopkins Berman Institute of Bioethics. “That’s part of what’s both interesting and frustrating about where we are. This is an unproven technology. It’s being rolled out in the midst of a public health emergency. There’s a lot of learning as we go as a result.”

Notifying people, anonymously

While state apps vary, the primary approach being used in the U.S. is based on technology from Apple and Google: A person downloads an app created by their state health department. Using the person’s mobile phone, the app begins collecting anonymized information about other phones it comes near: which phones, how close and for how long. That “digital handshake” information is stored on the person’s phone. If a person tests positive for COVID-19, health officials give that person a code to enter into the app. An alert then goes out to other app users who have been near that person in the prior two weeks.
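In rough terms, that handshake-and-alert loop can be sketched in a few dozen lines of code. The example below is a simplified illustration written for this article, not the software Apple and Google actually ship, which relies on Bluetooth and rotating cryptographic identifiers; the class and method names here, such as ExposureSketch and reportPositive, are invented. It shows only the core idea: each phone broadcasts random tokens, remembers the tokens it hears nearby, and later checks them against tokens published by people who report a positive test.

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.HashSet;
import java.util.Set;

// Simplified illustration of anonymous exposure notification.
// Not the real Apple-Google system; all names are hypothetical.
public final class ExposureSketch {

    private static final SecureRandom RANDOM = new SecureRandom();

    private final Set<String> myBroadcastTokens = new HashSet<>(); // tokens this phone has broadcast
    private final Set<String> tokensHeardNearby = new HashSet<>(); // tokens heard from nearby phones

    // Generate a fresh random token to broadcast; rotating it often keeps the phone anonymous.
    public String nextBroadcastToken() {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);
        String token = Base64.getEncoder().encodeToString(bytes);
        myBroadcastTokens.add(token);
        return token;
    }

    // Record a token overheard from a nearby phone (the "digital handshake").
    public void recordNearbyToken(String token) {
        tokensHeardNearby.add(token);
    }

    // A person who tests positive shares the tokens their phone broadcast
    // (in practice, only after entering a code supplied by health officials).
    public Set<String> reportPositive() {
        return new HashSet<>(myBroadcastTokens);
    }

    // Everyone else checks the published tokens against what their phone heard;
    // a match means possible exposure, with no names or locations involved.
    public boolean wasExposed(Set<String> publishedPositiveTokens) {
        for (String token : publishedPositiveTokens) {
            if (tokensHeardNearby.contains(token)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        ExposureSketch alice = new ExposureSketch();
        ExposureSketch bob = new ExposureSketch();

        // Alice and Bob spend time near each other: their phones exchange tokens.
        bob.recordNearbyToken(alice.nextBroadcastToken());
        alice.recordNearbyToken(bob.nextBroadcastToken());

        // Alice later tests positive and publishes her broadcast tokens.
        Set<String> published = alice.reportPositive();

        // Bob's phone finds a match and would show an exposure alert.
        System.out.println("Bob exposed? " + bob.wasExposed(published));
    }
}
```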
Two approaches

COVID-19 apps for mobile phones first appeared in Asia, in China and South Korea. There, officials used a phone’s location information to track people. It’s an approach being used in other parts of the world. In Israel, the government is scouring people’s mobile phone records to locate those who’ve been near someone who has tested positive, in order to possibly quarantine those people. In Turkey, a person’s mobile phone software tracks their movements and who they’ve been near.

But approaches that use a phone’s location information raise privacy questions, said Megan DeBlois, a systems security graduate student who helped to create the COVID-19 App Tracker, a website that keeps track of COVID-19 apps around the world. “There are too many apps that request far too much,” she said.

Anonymous users

U.S. states are creating their own apps, based on the approach offered by Apple and Google, which made it a condition of using their technology that the COVID-19 apps couldn’t use mobile phone location data. That privacy requirement helps build people’s trust in the apps, said Sarah Kreps, a government professor at Cornell University who is studying COVID-19, technology and public sentiment. Knowing someone who has been infected by the coronavirus that causes COVID-19 also makes people more willing to try a COVID-19 app, she said.

“In order for these apps to be effective, you need to have enough of a critical mass of people who are willing to download and use the app,” she said. “And short of mandating that, as was done in China, then you need a kind of public trust.”

So far in Virginia and other states with COVID-19 apps, people recently interviewed appeared open to using the apps. “I’m trying to be personally conscious, responsible, for what I should be doing,” said Mike, who was recently on a bike path in Northern Virginia. “This was billed as something you can trust, and I accept it.”

“I read up on it and honestly I feel pretty good about it,” said Hayes, a graduate student at the University of Arizona who planned to download the CovidWatch app. “They’ve done a lot of stuff to avoid privacy issues. I think it sounds pretty legit.”

Limits of privacy

But anonymous COVID-19 apps come with a trade-off: They limit the app’s usefulness to public health officials. If a person’s identity and location aren’t known, the app gives scarce information about an ongoing outbreak.

Joyce Schroeder heads the molecular and cellular biology department at the University of Arizona and has been the lead in developing CovidWatch, an Arizona-based app that doesn’t collect individuals’ private information. That’s “a good thing,” she said. “We want to have our privacy. But it’s also a frustrating thing when you’re trying to collect data on something and find out if it’s working. There’s very little data that we can collect on the app.”

States working together

Outside the U.S., countries’ health departments have been issuing nationwide apps. In the U.S., the federal government isn’t doing its own app, so states have contracted with app developers to create their own. So far, nine states have issued COVID-19 notification apps based on the Apple-Google technology, with more states working on their own, according to a review by 9to5Mac. In its latest software update, Apple installed a feature called Exposure Notification on mobile devices so that states can more easily start notifying people if they’ve been exposed. Users can turn it on or off. Google is expected to issue the same Android update soon.

Working with Microsoft, the Association of Public Health Laboratories recently launched a “national key server,” which will make it possible to use an app from one state while visiting other states.

While it’s too early to say, these efforts to use technology may make a difference in the fight against COVID-19, said Johns Hopkins’ Kahn. “It’s an opportunity,” he said, “to help steer the positive use of a technology during what are obviously very challenging times.”


US States Turn to Apps in Fight Against Virus Spread

With tens of thousands of new coronavirus cases daily in the U.S., states are launching digital apps that alert people if they have been exposed to someone who tested positive for the virus. Virginia recently rolled out a COVID exposure app that became instantly popular with residents. Health officials are trying to determine whether such apps will work to help slow virus transmission. VOA’s Julie Taboh has more.
Producers: Julie Taboh, Adam Greenbaum. Videographers: Adam Greenbaum, VPM, Skype, VDH.

Amazon Launches Trial of Pay-by-Palm Device

Need to pay for some groceries? No problem, just wave your palm. That could be the new mode of payment at Amazon Go stores if current trials of its new technology in Seattle, Washington, are successful.   The technology, known as Amazon One, is a “free, contactless service that lets you use your palm to pay, enter or identify yourself,” according to its website. The product, which is undergoing trials at two Go stores in Seattle, will allow customers to enter their credit card details and cell phone number and scan their palm or palms for distinct details such as “surface area, lines and ridges as well as subcutaneous features such as vein patterns” on a biometric device. The individual palm details are then used to create a customer’s unique palm signature, and Amazon is counting on that to protect customer information. The e-commerce company assures customers that the Amazon One device does not store information. “We treat your palm signature just like other highly sensitive personal data and keep it safe using best-in-class technical and physical security controls,” according to the website.  Once sign-up is complete, customers can purchase goods and services with their palm prints by hovering over the payment device. It will also allow customers to use their palms as a form of ID, which allows them to enter Go stores without a code. If customers change their minds about using the service, Amazon says it will completely delete their information. “Amazon will permanently delete your palm signature from Amazon’s systems after completion of any remaining transactions,” the website says. “Your Amazon One ID will also be automatically deleted if you do not interact with an Amazon One device for two years.” Amazon says it hopes to replicate the technology in all of its Go stores after its pilot use in Seattle and that it looks forward to other retailers signing up for the service.  
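To picture the sign-up and checkout loop Amazon describes, here is a deliberately simplified sketch written for this article. It is not Amazon’s software, and the names used (PalmPayDemo, enroll, payByPalm, deleteProfile) are invented; real biometric systems derive far more robust templates than the stand-in hash below. The sketch only shows the shape of the idea: a palm scan becomes an opaque signature, the signature keys a stored payment profile, and deleting the profile removes the signature.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Purely illustrative sketch of a pay-by-palm flow; not Amazon's implementation.
public final class PalmPayDemo {

    // A stored profile links a palm signature to payment details supplied at sign-up.
    private record Profile(String cardNumber, String phoneNumber) {}

    // Maps an opaque palm signature to the customer's enrolled profile.
    private final Map<String, Profile> profilesBySignature = new HashMap<>();

    // Stand-in for deriving a unique signature from scanned palm features
    // (surface area, lines, ridges, vein patterns). A real system would use
    // proper biometric templating, not a hash of a string.
    private static String palmSignature(String scannedPalmFeatures) {
        return Integer.toHexString(Objects.hash(scannedPalmFeatures));
    }

    // Sign-up: card details, phone number and a palm scan create the profile.
    public void enroll(String scannedPalmFeatures, String cardNumber, String phoneNumber) {
        profilesBySignature.put(palmSignature(scannedPalmFeatures), new Profile(cardNumber, phoneNumber));
    }

    // Checkout: hovering the palm over the reader looks up the profile and charges it.
    public boolean payByPalm(String scannedPalmFeatures, double amount) {
        Profile profile = profilesBySignature.get(palmSignature(scannedPalmFeatures));
        if (profile == null) {
            return false; // unknown palm: payment declined
        }
        System.out.printf("Charged $%.2f to card ending %s%n",
                amount, profile.cardNumber().substring(profile.cardNumber().length() - 4));
        return true;
    }

    // Deletion on request, mirroring the promise to remove a palm signature entirely.
    public void deleteProfile(String scannedPalmFeatures) {
        profilesBySignature.remove(palmSignature(scannedPalmFeatures));
    }

    public static void main(String[] args) {
        PalmPayDemo store = new PalmPayDemo();
        store.enroll("alice-palm-features", "4111111111111111", "555-0100");
        store.payByPalm("alice-palm-features", 12.50);                      // succeeds
        store.deleteProfile("alice-palm-features");
        System.out.println(store.payByPalm("alice-palm-features", 4.00));   // false after deletion
    }
}
```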

TikTok Launches US Election Guide

Chinese-owned video sharing platform TikTok says it is creating a guide “to protect against misinformation” during the 2020 U.S. elections. In a blog post Tuesday, the company said its guide would connect “100 million Americans with trusted information about the elections from the National Association of Secretaries of State, BallotReady, SignVote, and more.”

TikTok, which is especially popular with younger people, is owned by ByteDance, a Chinese company. TikTok has sought to alleviate U.S. concerns over privacy issues by forming a partnership with two U.S. companies, Oracle and Walmart. The deal has not been finalized, and there have been conflicting statements among the parties about how much of the new venture each company would own.

The Trump administration had moved to ban TikTok from app stores, but on Sunday a judge blocked an order that would have prevented app stores from distributing it. The judge gave lawyers for TikTok and the administration until Wednesday to meet and propose a schedule for further proceedings in the case.