One big concern about autonomous vehicles is that logical computers sometimes have trouble dealing with a messy world. Case in point: a pedestrian was struck and killed by an autonomous vehicle in Arizona last year. But new algorithms are trying to solve that potentially deadly problem. VOA’s Kevin Enochs reports.
…
Twitter Terror: Arrests Prompt Concern Over Online Extremism
A few months after he turned 17 — and more than two years before he was arrested — Vincent Vetromile recast himself as an online revolutionary.
Offline, in this suburb of Rochester, New York, Vetromile was finishing requirements for promotion to Eagle Scout in a troop that met at a local church. He enrolled at Monroe Community College, taking classes to become a heating and air conditioning technician. On weekends, he spent hours in the driveway with his father, a Navy veteran, working on cars.
On social media, though, the teenager spoke in world-weary tones about the need to “reclaim our nation at any cost.” Eventually he subbed out the grinning selfie in his Twitter profile, replacing it with the image of a colonial militiaman shouldering an AR-15 rifle. And he traded his name for a handle: “Standing on the Edge.”
That edge became apparent in Vetromile’s posts, including many interactions over the last two years with accounts that praised the Confederacy, warned of looming gun confiscation and declared Muslims to be a threat.
In 2016, he sent the first of more than 70 replies to tweets from a fiery account with 140,000 followers, run by a man billing himself as Donald Trump’s biggest Canadian supporter. The final exchange came late last year.
“Islamic Take Over Has Begun: Muslim No-Go Zones Are Springing Up Across America. Lock and load America!” the Canadian tweeted on December 12, with a video and a map highlighting nine states with Muslim enclaves.
“The places listed are too vague,” Vetromile replied. “If there were specific locations like ‘north of X street in the town of Y, in the state of Z’ we could go there and do something about it.”
Weeks later, police arrested Vetromile and three friends, charging them with plotting to attack a Muslim settlement in rural New York. And with extremism on the rise across the U.S., this town of neatly kept Cape Cods confronted difficult questions about ideology and young people — and technology’s role in bringing them together.
The reality of the plot Vetromile and his friends are charged with hatching is, in some ways, both less and more than what was feared when they were arrested in January.
Prosecutors say there is no indication that the four — Vetromile, 19; Brian Colaneri, 20; Andrew Crysel, 18; and a 16-year-old The Associated Press isn’t naming because of his age — had set an imminent or specific date for an attack. Reports they had an arsenal of 23 guns are misleading; the weapons belonged to parents or other relatives.
Prosecutors allege the four discussed using those guns, along with explosive devices investigators say were made by the 16-year-old, in an attack on the community of Islamberg.
Residents of the settlement in Delaware County, New York — mostly African-American Muslims who relocated from Brooklyn in the 1980s — have been harassed for years by right-wing activists who have called it a terrorist training camp. A Tennessee man, Robert Doggart, was convicted in 2017 of plotting to burn down Islamberg’s mosque and other buildings.
But there are few clues so far to explain how four young men with little experience beyond their high school years might have come up with the idea to attack the community. All have pleaded not guilty, and several defense attorneys, back in court Friday, are arguing there was no plan to actually carry out any attack, chalking it up to talk among buddies. Lawyers for the four did not return calls, and parents or other relatives declined interviews.
“I don’t know where the exposure came from, if they were exposed to it from other kids at school, through social media,” said Matthew Schwartz, the Monroe County assistant district attorney prosecuting the case. “I have no idea if their parents subscribe to any of these ideologies.”
Well beyond upstate New York, the spread of extremist ideology online has sparked growing concern. Google and Facebook executives went before the House Judiciary Committee this month to answer questions about their platforms’ role in feeding hate crime and white nationalism. Twitter announced new rules last fall prohibiting the use of “dehumanizing language” that risks “normalizing serious violence.”
But experts said the problem goes beyond language, pointing to algorithms used by search engines and social media platforms to prioritize content and spotlight likeminded accounts.
“Once you indicate an inclination, the machine learns,” said Jessie Daniels, a professor of sociology at New York’s Hunter College who studies the online contagion of alt-right ideology. “That’s exactly what’s happening on all these platforms … and it just sends some people down a terrible rabbit hole.”
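As a rough illustration of that dynamic — not any platform’s actual code — the short Python sketch below shows how a ranking rule that only rewards engagement can quickly tilt a feed toward whatever a user has already clicked. The posts, topics and weights are invented.

```python
# Toy engagement-driven ranking loop; everything here is illustrative.
from collections import defaultdict

posts = [
    {"id": 1, "topic": "sports"},
    {"id": 2, "topic": "sports"},
    {"id": 3, "topic": "conspiracy"},
    {"id": 4, "topic": "conspiracy"},
    {"id": 5, "topic": "news"},
]

# The "machine learns" step: every click nudges that topic's weight upward.
topic_weight = defaultdict(lambda: 1.0)

def rank(feed):
    # Higher-weighted topics float to the top of the feed.
    return sorted(feed, key=lambda p: topic_weight[p["topic"]], reverse=True)

def click(post):
    topic_weight[post["topic"]] *= 1.5  # reward engagement, nothing else

# A couple of early clicks on fringe content...
click(posts[2])
click(posts[3])

# ...and the next ranking already leads with more of the same.
print([p["id"] for p in rank(posts)])  # -> [3, 4, 1, 2, 5]
```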
She and others point to Dylann Roof, who in 2015 murdered nine worshippers at a historic black church in Charleston, South Carolina. In writings found afterward, Roof recalled how his interest in the shooting of black teenager Trayvon Martin had prompted a Google search for the term “black on white crime.” The first site the search engine pointed him to was run by a racist group promoting the idea that such crime is common, and as he learned more, Roof wrote, that eventually drove his decision to attack the congregation.
In the Rochester-area case, electronic messages between two of those arrested, seen by the AP, along with papers filed in the case, suggest doubts divided the group.
“I honestly see him being a terrorist,” one of those arrested, Crysel, told his friend Colaneri in an exchange last December on Discord, a messaging platform popular with gamers that has also gained notoriety for its embrace by some followers of the alt-right.
“He also has a very odd obsession with pipe bombs,” Colaneri replied. “Like it’s borderline creepy.”
It is not clear from the message fragment seen which of the others they were referencing. What is clear, though, is the long thread of frustration in Vetromile’s online posts — and the way those posts link him to an enduring conspiracy theory.
A few years ago, Vetromile’s posts on Twitter and Instagram touched on subjects like video games and English class.
He made the honor roll as an 11th-grader but sometime thereafter was suspended and never returned, according to former classmates and others. The school district, citing federal law on student records, declined to provide details.
Ron Gerth, who lives across the street from the family, recalled Vetromile as a boy roaming the neighborhood with a friend, pitching residents on a leaf-raking service: “Just a normal, everyday kid wanting to make some money, and he figured a way to do it.” More recently, Gerth said, Vetromile seemed shy and withdrawn, never uttering more than a word or two if greeted on the street.
Vetromile and suspect Andrew Crysel earned the rank of Eagle in Boy Scout Troop 240, where the 16-year-old was also a member. None ever warranted concern, said Steve Tyler, an adult leader.
“Every kid’s going to have their own sort of geekiness,” Tyler said, “but nothing that would ever be considered a trigger or a warning sign that would make us feel unsafe.”
Crysel and the fourth suspect, Colaneri, have been diagnosed with Asperger’s syndrome, a milder form of autism, their families have said. Friends described Colaneri as socially awkward and largely uninterested in politics. “He asked, if we’re going to build a wall around the Gulf of Mexico, how are people going to go to the beach?” said Rachael Lee, the aunt of Colaneri’s girlfriend.
Vetromile attended community college with Colaneri before dropping out in 2017. By then, he was fully engaged in online conversations about immigrants living in the U.S. illegally, gun rights and Trump. Over time, his statements became increasingly militant.
“We need a revolution now!” he tweeted in January, replying to a thread warning of a coming “war” over gun ownership.
Vetromile directed some of his strongest statements at Muslims. Tweets from the Canadian account, belonging to one Mike Allen, seemed to push that button.
In July 2017, Allen tweeted “Somali Muslims take over Tennessee town and force absolute HELL on terrified Christians.” Vetromile replied: “@realDonaldTrump please do something about this!”
A few months later, Allen tweeted: “Czech politicians vote to let citizens carry guns, shoot Muslim terrorists on sight.” Vetromile’s response: “We need this here!”
Allen’s posts netted hundreds of replies a day, and there’s no sign he read Vetromile’s responses. But others did, including the young man’s reply to the December post about Muslim “no-go zones.”
That tweet included a video interview with Martin Mawyer, whose Christian Action Network made a 2009 documentary alleging that Islamberg and other settlements were terrorist training camps. Mawyer linked the settlements, which follow the teachings of a controversial Pakistani cleric, to a group called Jamaat al-Fuqra that drew scrutiny from law enforcement in the 1980s and 1990s. In 1993, Colorado prosecutors won convictions of four al-Fuqra members in a racketeering case that included charges of fraud, arson and murder.
Police and analysts have repeatedly said Islamberg does not threaten violence. Nevertheless, the allegations of Mawyer’s group continue to circulate widely online and in conservative media.
Replying to questions by email, Mawyer said his organization has used only legal means to try to shut down the operator of the settlements.
“Vigilante violence is always the wrong way to solve social or personal problems,” he said. “Christian Action Network had no role, whatsoever, in inciting any plots.”
Online, though, Vetromile reacted with consternation to the video of Mawyer: “But this video just says ‘upstate NY and California’ and that’s too big of an area to search for terrorists,” he wrote.
Other followers replied with suggestions. “Doesn’t the video state Red House, Virginia as the place?” one asked. Virginia was too far, Vetromile replied, particularly since the map with the tweet showed an enclave in his own state.
When another follower offered a suggestion, Vetromile signed off: “Eh worth a look. Thanks.”
The exchange ended without a word from the Canadian account, whose tweet started it.
Three months before the December exchange on Twitter, the four suspects started using a Discord channel dubbed “#leaders-only” to discuss weapons and how they would use them in an attack, prosecutors allege. Vetromile set up the channel, one of the defense attorneys contends, but prosecutors say they don’t consider any one of the four a leader.
In November, the conversation expanded to a second channel: “#militia-soldiers-wanted.”
At some point last fall the 16-year-old made a grenade — “on a whim to satisfy his own curiosity,” his lawyer said in a court filing that claims the teen never told the other suspects. That filing also contends the boy told Vetromile that forming a militia was “stupid.”
But other court records contradict those assertions. Another teen, who is not among the accused, told prosecutors that the 16-year-old showed him what looked like a pipe bomb last fall and then said that Vetromile had asked for prototypes. “Let me show you what Vinnie gave me,” the young suspect allegedly said during another conversation, before leaving the room and returning with black explosive powder.
In January, the 16-year-old was in the school cafeteria when he showed a classmate a photo of one of his fellow suspects wearing some kind of tactical vest. He made a comment like, “He looks like the next school shooter, doesn’t he?” according to Greece Police Chief Patrick Phelan. The other student reported the incident, and questioning by police led to the arrests and charges of conspiracy to commit terrorism.
The allegations have jarred a region where political differences are the norm. Rochester, roughly half white and half black and other minorities, votes heavily Democratic. Neighboring Greece, which is 87 percent white, leans conservative. Town officials went to the Supreme Court to win a 2014 ruling allowing them to start public meetings with a chaplain’s prayer.
The arrests dismayed Bob Lonsberry, a conservative talk radio host in Rochester, who said he checked Twitter to confirm Vetromile didn’t follow his feed. But looking at the accounts Vetromile did follow convinced him that politics on social media had crossed a dangerous line.
“The people up here, even the hillbillies like me, we would go down with our guns and stand outside the front gate of Islamberg to protect them,” Lonsberry said. “It’s an aberration. But … aberrations, like a cancer, pop up for a reason.”
Online, it can be hard to know what is true and who is real. Mike Allen, though, is no bot.
“He seems addicted to getting followers,” said Allen’s adult son, Chris, when told about the arrest of one of the thousands attuned to his father’s Twitter feed. Allen himself called back a few days later, leaving a brief message with no return number.
But a few weeks ago, Allen welcomed in a reporter who knocked on the door of his home, located less than an hour from the Peace Bridge linking upstate New York to Ontario, Canada.
“I really don’t believe in regulation of the free marketplace of ideas,” said Allen, a retired real estate executive, explaining his approach to social media. “If somebody wants to put bulls— on Facebook or Twitter, it’s no worse than me selling a bad hamburger, you know what I mean? Buyer beware.”
Sinking back in a white leather armchair, Allen, 69, talked about his longtime passion for politics. After a liver transplant stole much of his stamina a few years ago, he filled downtime by tweeting about subjects like interest rates.
When Trump announced his candidacy for president in 2015, in a speech memorable for labeling many Mexican immigrants as criminals, Allen said he was determined to help get the billionaire elected. He began posting voraciously, usually finding material on conservative blogs and Facebook feeds and crafting posts to stir reaction.
Soon his account was gaining up to 4,000 followers a week.
Allen said he had hoped to monetize his feed somehow. But suspicions that Twitter “shadow-banning” was capping gains in followers made him consider closing the account. That was before he was shown some of his tweets and the replies they drew from Vetromile — and told the 19-year-old was among the suspects charged with plotting to attack Islamberg.
“And they got caught? Good,” Allen said. “We’re not supposed to go around shooting people we don’t like. That’s why we have video games.”
Allen’s own likes and dislikes are complicated. He said he strongly opposes taking in refugees for humanitarian reasons, arguing only immigrants with needed skills be admitted. He also recounted befriending a Muslim engineer in Pakistan through a physics blog and urging him to move to Canada.
Shown one of his tweets from last year — claiming Czech officials had urged people to shoot Muslims — Allen shook his head.
“That’s not a good tweet,” he said quietly. “It’s inciting.”
Allen said he rarely read replies to his posts — and never noticed Vetromile’s.
“If I’d have seen anybody talking violence, I would have banned them,” he said.
He turned to his wife, Kim, preparing dinner across the kitchen counter. Maybe he should stop tweeting, he told her. But couldn’t he continue until Trump was reelected?
“We have a saying, ‘Oh, it must be true, I read it on the internet,’” Allen said, before showing his visitor out. “The internet is phony. It’s not there. Only kids live in it and old guys, you know what I mean? People with time on their hands.”
The next day, Allen shut down his account, and the long narrative he spun all but vanished.
Tech Helping Make Big Impact on Local Government
Local governments often try to solve problems using old technology. A U.S. Senate bill aims to fund small tech teams to help state and municipal governments update and rebuild government systems. Deana Mitchell takes a look at the impact on one program that is serving the needy.
…
Kenya Taps Into Technology to Attract Young People to Farms
Kenyan innovators are betting on digital technologies to attract young people to agriculture currently dominated by an aging population. With 98 percent mobile phone penetration, according to the latest data from the Communications Authority of Kenya, the cellphone is proving to be an important source of extension services in areas where such services are not available. Sarah Kimani reports for VOA from Kinoo, Kenya.
…
Student Scientists Helping to Monitor Air Quality
All too often what looks like haze is actually tiny particles in the air that are so small you can breathe them in, and they can be dangerous. Now a group of citizen scientists with help from the National Science Foundation is creating a network of sensors that could warn people when the air they breathe turns bad. VOA’s Kevin Enochs reports.
…
US Social Media Firms Scramble to Fight Fake News
As Notre Dame Cathedral burned, a posting on Facebook circulated – a grainy video of what appeared to be a man in traditional Muslim garb up in the cathedral.
Fact-checkers worldwide jumped into action and pointed out that the video and postings were fake, and the posts never went viral.
But this week, the Sri Lanka government temporarily shut down Facebook and other sites to stop the spread of misinformation in the wake of the Easter Sunday bombings in the country that killed more than 250 people. Last year, misinformation on Facebook was blamed for contributing to riots in the country.
Facebook, Twitter, YouTube and others are increasingly being held responsible for the content on their sites as the world tries to grapple in real time with events as they unfold. From lawmakers to the public, there has been a rising cry for the sites to do more to combat misinformation, particularly if it targets certain groups.
Shift in sense of responsibility
For years, some critics of social media companies, such as Twitter, YouTube and Facebook, have accused them of having done the minimum to monitor and stamp out misinformation on their platforms. After all, the internet platforms are generally not legally responsible for the content there, thanks to a 1996 U.S. federal law that says they are not publishers. This law has been held up as a key protection for free expression online.
And, that legal protection has been key to the internet firms’ explosive growth. But there is a growing consensus that companies are ethically responsible for misleading content, particularly if the content has an audience and is being used to target certain groups.
Tuning into dog whistles
At a recent House Judiciary Committee hearing on white supremacy and hate crimes, Congresswoman Sylvia Garcia, a Texas Democrat, questioned representatives from Facebook and Google about their policies.
“What have you done to ensure that all your folks out there globally know the dog whistles, know the keywords, the phrasing, the things that people respond to, so we can be more responsive and be proactive in blocking some of this language?” Garcia asked.
Each company takes a different approach.
Facebook, which perhaps has had the most public reckoning over fake news, won’t say it’s a media company. But it has taken partial responsibility for the content on its site, said Daniel Funke, a reporter at the International Fact-Checking Network at the Poynter Institute.
The social networking giant uses a combination of technology and humans to address false posts and messages that appear to target groups. It is collaborating with outside fact-checkers to weed out objectionable content, and has hired thousands to grapple with content issues on its site.
Swamp of misinformation
Twitter has targeted bots, automatic accounts that spread falsehoods. But fake news often is born on Twitter and jumps to Facebook.
“They’ve done literally nothing to fight misinformation,” Funke said.
YouTube, owned by Google, has altered its algorithms to make it harder to find problematic videos and embedded code to make sure relevant, factual content comes up higher in search results. YouTube is “such a swamp of misinformation just because there is so much there, and it lives on beyond the moment,” Funke said.
Other platforms of concern are Instagram and WhatsApp, both owned by Facebook.
Some say what the internet companies have done so far is not enough.
“To use a metaphor that’s often used in boxing, truth is against the ropes. It is getting pummeled,” said Sam Wineburg, an education professor at Stanford University.
What’s needed, he said, is for the companies to take full responsibility: “This is a mess we’ve created and we are going to devote resources that will lower the profits to shareholders, because it will require a deeper investment in our own company.”
Fact-checking and artificial intelligence
One of the fact-checking organizations that Facebook works with is FactCheck.org. It receives misinformation posts from Facebook and others. Its reporters check out the stories then report on their own site whether the information is true or false. That information goes back to Facebook as well.
Facebook is “then able to create a database now of bad actors, and they can start taking action against them,” said Eugene Kiely, director of FactCheck.org. Facebook has said it will make it harder to find posts by people or groups that continually post misinformation.
The groups will see fewer financial incentives, Kiely points out. “They’ll get less clicks and less advertising.”
Funke predicts companies will use technology to semi-automate fact-checking, making it better, faster and able to match the scale of misinformation.
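As one illustration of the kind of semi-automation Funke describes, the Python sketch below matches an incoming post against claims that human fact-checkers have already rated, using simple text similarity. The claims, ratings and threshold are invented for the example, not any organization’s real pipeline.

```python
# Hedged sketch: route a post to an existing fact-check ruling if it closely
# matches a claim that has already been checked; otherwise send it to a human.
from difflib import SequenceMatcher

# Hypothetical database built from fact-checkers' published rulings.
checked_claims = {
    "muslim no-go zones are springing up across america": "false",
    "vaccines cause autism": "false",
}

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def triage(post_text: str, threshold: float = 0.7):
    """Return the closest previously checked claim and its rating, or None."""
    best = max(checked_claims, key=lambda claim: similarity(post_text, claim))
    if similarity(post_text, best) >= threshold:
        return best, checked_claims[best]
    return None  # novel claim: route to a human fact-checker

print(triage("Muslim no-go zones springing up across America, lock and load"))
```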
That will cost money of course.
It also could slow the internet companies’ growth.
Does being more responsible mean making less money? Social media companies are likely to find out.
…
US Social Media Companies Pressed to Better Police Content
Social media companies such as Twitter, YouTube and Facebook are not legally responsible for the content that users upload to their sites. That legal protection has been key to their explosive growth, but there is a growing consensus that companies must do more to root out misleading content. Michelle Quinn reports, the companies may be taking action in the hope of avoiding stricter government regulation.
…
New Mexico Armed Border Group Barred from Facebook Fundraising
Facebook Inc on Thursday barred a New Mexico-based paramilitary group that has stopped undocumented migrants near the U.S.-Mexico border from using its fundraising tools and said it would remove any of its posts that violated company policies.
Facebook made the statement after a civil rights organization asked it to block videos posted by the United Constitutional Patriots (UCP), saying the clips violated its standards, which prohibit images showing criminal acts.
“People cannot use our fundraising tools for activities involving weapons,” said a Facebook spokesperson in a statement. “We will remove fundraisers this group may try to start on our service and any content that violates our Community Standards.”
Since February, the UCP has posted a string of videos showing members armed with semi-automatic rifles halting migrants in New Mexico and telling them to sit and wait for U.S. Border Patrol to arrest them.
The UCP says the videos demonstrate its work helping Border Patrol detain some 5,600 migrants in just 60 days during a surge in illegal crossings. Civil rights groups accuse the group of illegally detaining asylum-seekers.
“These videos include content showing possible assault, kidnapping and false imprisonment,” the Lawyers’ Committee for Civil Rights Under Law said in a statement Thursday asking Facebook to remove them.
UCP spokesman Jim Benvie did not immediately respond to a request for comment. In a Facebook Live post on Tuesday, he described the group’s videos as “citizen journalism” showing reality on the border.
“There is a crisis at the border, we are being invaded,” Benvie said.
Social media policies
Facebook’s Community Standards bar users from publicizing crime, using hate speech or presenting arguments for restricting immigration policy, among other things, the spokesperson said.
PayPal and GoFundMe on Friday barred the UCP, citing policies that prohibit the promotion of hate or violence.
New Mexico Governor Michelle Lujan Grisham last week called for an investigation of the group.
The FBI arrested the UCP’s commander, Larry Hopkins, on Saturday on federal weapons charges dating back to 2017.
Hopkins was assaulted in a New Mexico jail on Monday and hospitalized with broken ribs.
The UCP left its campsite Tuesday after Union Pacific Railroad accused it of trespassing, but Benvie said it would soon relocate to a nearby spot along the border.
“We’re not going to quit fighting, we’re not going to quit reporting,” he said.
…
Canada to Seek Court Order to Force Facebook to Follow Privacy Laws
Facebook Inc broke Canadian privacy laws when it collected the information of some 600,000 citizens, a top watchdog said on Thursday, pledging to seek a court order to force the social media giant to change its practices.
Privacy Commissioner Daniel Therrien made his comments while releasing the results of an investigation, opened a year ago, into a data sharing scandal involving Facebook and the now-defunct British political consulting firm Cambridge Analytica.
Though Facebook has acknowledged a “major breach of trust” in the Cambridge Analytica scandal, the company disputed the results of the Canadian probe, Therrien said.
“Facebook’s refusal to act responsibly is deeply troubling given the vast amount of sensitive personal information users have entrusted to this company,” said Therrien.
Specifically, the company refused to voluntarily submit to audits of its privacy policies and practices over the next five years, he said.
“The stark contradiction between Facebook’s public promises to mend its ways on privacy and its refusal to address the serious problems we’ve identified – or even acknowledge that it broke the law – is extremely concerning,” he added.
Facebook was not immediately available for comment. The Office of the Privacy Commissioner does not have the power to levy financial penalties, but it can seek court orders to force an entity to follow its recommendations.
It could take a year to obtain a court order, Therrien said.
The investigation revealed an “overall lack of responsibility” for people’s personal information, meaning “there is a high risk” that their data “could be used in ways that they do not know or suspect, exposing them to potential harms.”
Apart from privacy violations by Facebook, the investigation also highlighted problems with regulating social media. Facebook’s rejection of the watchdog’s recommendations revealed “critical weaknesses” in the current legislation, Therrien added, urging lawmakers to give his office more sanctioning power.
“We should not count on all companies to act responsibly, and therefore a new law should ensure a third party, a regulator, holds companies responsible,” Therrien said.
Canadian Democratic Institutions Minister Karina Gould, who this month said the government might have to regulate Facebook and other social media companies unless they did more to help combat foreign meddling in this October’s election, will react later on Thursday, a spokeswoman said.
Facebook said on Wednesday it had set aside $3 billion to cover a settlement with U.S. regulators probing revelations that the company had inappropriately shared information belonging to 87 million of its users with Cambridge Analytica.
…
Irish Regulator Opens Inquiry Into Facebook Over Password Storage
Facebook’s lead regulator in the European Union, Ireland’s Data Protection Commissioner, on Thursday said it had launched an inquiry into whether the company violated EU data rules by saving user passwords in plain text format on internal servers.
The probe is the latest to be launched out of Dublin into the social network giant. The Irish regulator in February said it had seven statutory inquiries into Facebook and three more into Facebook-owned Instagram and WhatsApp.
Facebook in March announced that it had resolved a glitch that exposed the passwords of millions of users, stored in readable format within its internal systems, to its employees.
The passwords were accessible to as many as 20,000 Facebook employees and dated back to as early as 2012, according to cyber security blog KrebsOnSecurity, which first reported the issue.
“The Data Protection Commission (DPC) was notified by Facebook that it had discovered that hundreds of millions of user passwords, relating to users of Facebook, Facebook Lite and Instagram, were stored by Facebook in plain text format in its internal servers,” the DPC said in a statement.
“We have this week commenced a statutory inquiry in relation to this issue to determine whether Facebook has complied with its obligations under relevant provisions of the GDPR,” it added.
The DPC said in February that it expected to conclude the first of its investigations into Facebook’s use of personal data this summer and the remainder by the end of the year.
Ireland hosts the European headquarters of a number of U.S. technology firms. Under the “One Stop Shop” provision of the EU’s General Data Protection Regulation (GDPR), the Irish commissioner is also the lead regulator for Twitter, LinkedIn, Apple and Microsoft.
As part of regulations introduced last year, a firm found to have broken data processing and handling rules can be fined up to 4 percent of its global revenue for the prior financial year or 20 million euros, whichever is higher.
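As a back-of-the-envelope illustration of that ceiling, the Python sketch below computes the larger of 4 percent of prior-year global revenue or 20 million euros. The revenue figures are examples only, not any company’s actual numbers.

```python
# Sketch of the GDPR maximum-fine rule described above.
def max_gdpr_fine(global_revenue_eur: float) -> float:
    """Upper bound on a fine: 4% of prior-year global revenue or EUR 20M, whichever is higher."""
    return max(0.04 * global_revenue_eur, 20_000_000)

# Example: 50 billion euros in revenue gives a ceiling of 2 billion euros.
print(max_gdpr_fine(50_000_000_000))  # 2000000000.0
# Example: for a smaller firm, the flat 20-million-euro figure dominates.
print(max_gdpr_fine(100_000_000))     # 20000000
```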
Canada’s federal privacy commissioner on Thursday announced the results of a probe that found Facebook had committed serious contraventions of privacy law and failed to take responsibility for protecting the personal information of citizens.
…
Microsoft Surges Toward Trillion-Dollar Value as Profits Rise
Microsoft said profits climbed in the past quarter on its cloud and business services as the U.S. technology giant saw its market value close in on the trillion-dollar mark.
Profits in the quarter to March 31 rose 19 percent to $8.8 billion on revenues of $30.8 billion, an increase of 14 percent from the same period a year earlier.
Microsoft shares gained some 3% in after-hours trade, pushing it closer to $1 trillion in value.
It ended the session Wednesday with a market valuation of some $960 billion, just behind Apple but ahead of Amazon.
In the fiscal third quarter, Microsoft showed its reliance on cloud computing and other business services which now drive its earnings, in contrast to its earlier days when it focused on consumer PC software.
“Leading organizations of every size in every industry trust the Microsoft cloud,” chief executive Satya Nadella said in a statement.
Commercial cloud revenue rose 41% from a year ago to $9.6 billion, which now makes up nearly a third of sales, Microsoft said.
Some $10.2 billion in revenue came from the “productivity and business services” unit which includes its Office software suite for both consumers and enterprises, and the LinkedIn professional social network.
The “more personal computing” unit which includes its Windows software, Surface devices and gaming operations generated $10.6 billion in the quarter.
…
Technology Ethics Campaigners Offer Plan to Fight ‘Human Downgrading’
Technology firms should do more to connect people in positive ways and steer away from trends that have tended to exploit human weaknesses, ethicists told a meeting of Silicon Valley leaders on Tuesday.
Tristan Harris and Aza Raskin are the co-founders of the nonprofit Center for Humane Technology and the ones who prompted Apple and Google to nudge phone users toward reducing their screen time.
Now they want companies and regulators to focus on reversing what they called “human downgrading,” which they see as at the root of a dozen worsening problems, by reconsidering the design and financial incentives of their systems.
Before a hand-picked crowd of about 300 technologists, philanthropists and others concerned with issues such as internet addiction, political polarization, and the spread of misinformation on the web, Harris said Silicon Valley was too focused on making computers surpass human strengths, rather than worrying about how they already exploit human weaknesses.
If that is not reversed, he said, “that could be the end of human agency,” or free will.
Problems include the spread of hate speech and conspiracy theories, propelled by financial incentives to keep users engaged alongside the use of powerful artificial intelligence on platforms like Alphabet Inc’s YouTube, Harris said.
YouTube and other companies have said they are cracking down on extremist speech and have removed advertising revenue-sharing from some categories of content.
Active Facebook communities can be a force for good, but they also aid the dissemination of false information, the campaigners said. For example, a vocal fringe that opposes vaccines, believing, contrary to scientific evidence, that they cause autism, has led to an uptick in diseases that had been nearly eradicated.
Facebook said in March it would reduce the distribution of content from groups promoting vaccine hoaxes.
In an interview after his speech, Harris said that what he has called a race to the bottom of the brainstem – manipulation of human instincts and emotions – could be reversed.
For example, he said that Apple and Google could reward app developers who help users, or Facebook could suggest that someone showing signs of depression call a friend who had previously been supportive.
Tech personalities attending included Apple co-founder Steve Wozniak, early Facebook funder turned critic Roger McNamee and MoveOn founders Joan Blades and Wes Boyd. Tech money is also backing the Center, including charitable funds started by founders of Hewlett Packard, EBay, and Craigslist.
The big companies, Harris said, “can change the incentives.”
…
New Zealand, France Plan Bid to Tackle Extremism on Social Media
In the wake of the Christchurch attack, New Zealand said on Wednesday that it would work with France in an effort to stop social media from being used to promote terrorism and violent extremism.
Prime Minister Jacinda Ardern said in a statement that she will co-chair a meeting with French President Emmanuel Macron in Paris on May 15 that will seek to have world leaders and CEOs of tech companies agree to a pledge, called the Christchurch Call, to eliminate terrorist and violent extremist content online.
A lone gunman killed 50 people at two mosques in Christchurch on March 15, while livestreaming the massacre on Facebook.
Brenton Tarrant, 28, a suspected white supremacist, has been charged with 50 counts of murder for the mass shooting.
“It’s critical that technology platforms like Facebook are not perverted as a tool for terrorism, and instead become part of a global solution to countering extremism,” Ardern said in the statement.
“This meeting presents an opportunity for an act of unity between governments and the tech companies,” she added.
The meeting will be held alongside the Tech for Humanity meeting of G7 digital ministers, of which France is the chair, and France’s separate Tech for Good summit, both on May 15, the statement said.
Ardern said at a press conference later on Wednesday that she had spoken with executives from a number of tech firms, including Facebook, Twitter, Microsoft, Google and a few other companies.
“The response I’ve received has been positive. No tech company, just like no government, would like to see violent extremism and terrorism online,” Ardern said at the media briefing, adding that she had also spoken with Facebook’s Mark Zuckerberg directly on the topic.
A Facebook spokesman said the company looks forward to collaborating with government, industry and safety experts on a clear framework of rules.
“We’re evaluating how we can best support this effort and who among top Facebook executives will attend,” the spokesman said in a statement sent by email.
Facebook, the world’s largest social network with 2.7 billion users, has faced criticism since the Christchurch attack that it failed to tackle extremism.
One of the main groups representing Muslims in France has said it was suing Facebook and YouTube, a unit of Alphabet’s Google, accusing them of inciting violence by allowing the streaming of the Christchurch massacre on their platforms.
Facebook Chief Operating Officer Sheryl Sandberg said last month that the company was looking to place restrictions on who can go live on its platform based on certain criteria.
…
US White Nationalists Barred by Facebook Find Haven on Russia Site
With U.S. social media companies tightening their content policies in the wake of the recent mosque shootings in New Zealand, some extremist groups are getting pushed to the margins of the internet. Researchers say that has turned Russian social media platforms such as VKontakte, or VK, into safe harbors for an ever greater number of white nationalists seeking to communicate with each other and get their messages out. VOA’s Anush Avetisyan has more.
…
Multisensory VR Allows Users to Step Into a Movie and Interact With Objects
Imagine stepping into a movie or virtual world and being able to interact with what’s there. That’s now possible through the magic of Hollywood combined with virtual reality technology. For $20, the company Dreamscape takes visitors through a multi-sensory journey. Currently in Los Angeles, creators say they plan on opening more virtual reality venues across the U.S. and eventually to other countries. VOA’s Elizabeth Lee shows us what to expect.
…
Tesla Shows Off Self-Driving Technology to Investors
Tesla broadcast a web presentation on Monday to update investors about its self-driving strategy as Chief Executive Elon Musk tries to show that the electric car maker’s massive investment in the sector will pay off.
Global carmakers, large technology companies and an array of startups are developing self-driving technology — including Alphabet Inc’s Waymo and Uber Technologies Inc — but experts say it will be years before the systems are ready for deployment.
Musk previously forecast that by 2018 cars would go “from your driveway to work without you touching anything.” Teslas still require human intervention and are not considered fully self-driving, according to industry standards.
The webcast, scheduled to begin at 11 a.m. PT (1800 GMT), was delayed and Tesla showed a repeating video of its vehicles for 30 minutes.
Teslas have been involved in a handful of crashes, some of them fatal, involving the use of the company’s AutoPilot system.
The system has automatic steering and cruise control but requires driver attention at the wheel. Tesla has been criticized by safety groups for being unclear about the need for “hands-on” driving.
The company also sells a “full self-driving option” for an additional $5,000, explained on Tesla’s website as “automatic driving from highway on-ramp to off-ramp,” automatic lane changes, the ability to autopark and to summon a parked car.
Coming later in 2019 is the ability to recognize traffic lights and stop signs, and perform automatic driving on city streets, says Tesla.
But Tesla’s use of the term “full self-driving” still garners criticism, as the option is not yet “Level 4,” or fully autonomous by industry standards, in which the car can handle all aspects of driving in most circumstances with no human intervention.
Tesla says its cars have the necessary hardware for full self-driving in most circumstances, and Musk said in February he was certain that Tesla would be “feature complete” for full self-driving in 2019, although drivers would still need to pay attention until the system’s reliability improved.
Tesla reports first-quarter earnings on Wednesday. That is also the deadline by which Musk and the U.S. Securities and Exchange Commission are supposed to settle their dispute over Musk’s use of Twitter.
…
Ford Unveils New Electric Fleet
Ford is showing off its new fleet of electric vehicles. Some of the standouts include a new hybrid plug-in and the promise of a new, all-electric model by 2022. The U.S. automaker plans to have a fleet of 40 different electric vehicles on the roads in the next three years. VOA’S Kevin Enochs reports.
…
The Future of Farming: Robots Tend Crops and Bovines Go 5G
British agriculture is going high-tech. Farmers recently tested cutting-edge technology like robots that autonomously tend fields and wireless cattle that may connect faster to the farm than you to your favorite app. Incoming message from Arash Arabasadi.
…
China’s Political System Helps Advance Its Artificial Intelligence
Recent technological advances demonstrated by China have started an intense debate on whether it is set to take a lead in the field of artificial intelligence, or AI, which has extensive business and military applications.
U.S. concerns about China’s AI advances have also influenced, in part, the ongoing trade negotiations between Washington and Beijing. Both the United States and European Union are taking measures to stop information leaks that are reportedly helping Chinese companies at the expense of Western business.
But many analysts are saying that Chinese corporate and defense-related research in areas like AI and 5G wireless technologies can thrive on their own even if information from the Western world is shut off. China is already reportedly leading in several segments of businesses like autonomous vehicles, facial recognition and certain kinds of drones.
The U.S.-based Allen Institute for Artificial Intelligence recently captured attention when it reported that China is a close second after the United States when it comes to producing frequently cited research papers on artificial intelligence. The U.S. contribution is 29%, and China accounts for 26% of such papers.
“The U.S. still is ahead in AI development capabilities, but the gap between the U.S. and China is closing rapidly because of the significant new AI investments in China,” Bart Selman, president-elect of the Association for the Advancement of Artificial Intelligence, a professional organization, told VOA.
Political advantage
Chinese President Xi Jinping has in recent months encouraged Communist leaders to “ensure that our country marches in the front ranks when it comes to theoretical research in this important area of AI, and occupies the high ground in critical and AI core technologies.” He also asked them to “ensure that critical and core AI technologies are firmly grasped in our own hands.”
Analysts said China’s political system and its government’s eagerness to support the technological advancement were key reasons it could build infrastructure such as cloud computing and a software engineering workforce, and become a big player in artificial intelligence.
Chinese companies enjoy special advantages in deploying new technology like facial recognition, which is often difficult in democratic countries like the U.S., said William Carter, deputy director and fellow in the Technology Policy Program at the Center for Strategic and International Studies.
“China does have strengths in terms of application development and deployment, and has the potential to take the lead in the deployment of some technologies like autonomous vehicles and facial recognition where ethical, social and policy hurdles may impede deployment in the U.S. and other parts of the world,” Carter said.
China’s capabilities in image and facial recognition are possibly the best in the world, partly because government controls have made it easier to generate data from a wide range of sources like banks, mobile phone companies and social media.
“These capabilities arise out of the use of deep learning on very large data sets. In general, China has the advantage of having more real world data to train AI systems on … than any other country,” Selman said.
Other areas where China has shown significant advances are natural language processing (in Chinese only) and drone (unmanned aerial vehicle) swarming.
“China also has unique capabilities that are not found in the U.S. or Europe. I’m thinking of electronic payment platforms [e.g. AliPay] and the super app WeChat that provide an advanced platform for the rapid introduction of further AI technologies,” Selman said.
U.S. role
Last February, U.S. President Donald Trump signed an executive order asking government agencies to do more with AI.
“Continued American leadership in artificial intelligence is of paramount importance to maintaining the economic and national security of the United States,” Trump was quoted as saying in an official press release accompanying the order.
Critics have said that Trump’s order does not suggest enhanced government investment and plans for attracting fresh talent in AI research and development, which is essential for growth and industry competition.
Gregory Allen is an adjunct senior fellow with the research group Center for a New American Security. He was recently quoted as saying that the U.S. Defense Advanced Research Projects Agency is spending the most on research and development at $2 billion over five years. In contrast, Shanghai, a Chinese city with provincial-level status, is planning to spend $15 billion on AI over 10 years.
“So literally, we have the U.S. federal government at present at risk of being outspent by a provincial government of China,” Allen said.
China’s AI capabilities have limits. They suffer from major weaknesses in areas like advanced semiconductors to support machine learning applications.
“At the end of the day, when it comes to most major AI fields, China is not the technological leader and is not the source of most foundational innovations,” Carter said.
The U.S. still dominates in the overall market for self-driving car technology, machine translation, natural language understanding, and web search. China has gained a strong presence in a few segments of these businesses, largely because of its vast domestic market.
Despite the competition, collaboration and exchange of ideas occur between the two countries in the AI field, although this aspect is less discussed, Carter added.
“Politically, the dynamic is more competitive; economically and scientifically, it is more collaborative,” he said.
…
NASA Launches Chicago Planetarium’s Student Project into Space
College student Fatima Guerra, 19, will be the first to admit she’s into some really nerdy stuff.
“Like, up there nerdy.”
“Way up there nerdy,” she says. “All the way up into space.”
Guerra is an astronomer in training, involved since a high school internship with a small project at the Adler Planetarium, with big goals.
“Our main goal was to see if the ozone layer is getting thinner and by how much, and if there is different parts of the Earth’s atmosphere getting thinner because of the pollution and greenhouse gases,” she told VOA from the laboratory at the Adler where she often works.
Coding ThinSat
Data that sheds light on those circumstances is gathered by a small electronic device called “ThinSat” designed to orbit the Earth. It is developed not by high-paid engineers and software programmers, but by Chicago-area students like Guerra.
“We focused on coding the different parts of the sensors that the ThinSat is composed of. So, we coded so that it can measure light intensity, pressure.”
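For readers curious what that kind of student code might look like, here is a simplified, hypothetical Python sketch of a sensor-logging loop for light intensity and pressure. The read functions are stand-ins for real sensor drivers, not the actual ThinSat firmware.

```python
# Hypothetical sketch: sample light and pressure readings with timestamps.
import random
import time

def read_light_intensity() -> float:
    # Placeholder for a real light-sensor driver.
    return random.uniform(0, 1000)   # lux

def read_pressure() -> float:
    # Placeholder for a real barometric-pressure-sensor driver.
    return random.uniform(0, 101.3)  # kPa

def sample(n: int = 3, interval_s: float = 1.0):
    records = []
    for _ in range(n):
        records.append({
            "timestamp": time.time(),
            "light_lux": read_light_intensity(),
            "pressure_kpa": read_pressure(),
        })
        time.sleep(interval_s)
    return records

for record in sample():
    print(record)
```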
“This stuff is very nerdy,” Jesus Garcia admits with a chuckle.
“What we hope to accomplish is look at Earth from space as if it was the very first exoplanet that we have. So, imagine that we are looking at the very first images from a very distant planet.”
As a systems engineer, Garcia oversees the work of the students developing ThinSat for the Adler’s Far Horizons Project, which he says “brings all types of students, volunteers and our staff to develop projects, engineering projects, that allow us to answer scientific questions.”
Garcia says the students he works with on the project cross national, racial and cultural divides to work toward a common goal.
“Here at the Adler, we have students who are minorities who have been faced with challenges of not having opportunities presented to them,” he said. “And here we are presenting a mission where they are collaborating with us scientists and engineers on our first mission that is going into space.”
Rocket carries project into space
As the Northrop Grumman-developed Antares rocket successfully blasted off from the coast of Virginia on April 17 on a NASA resupply mission, it wasn’t just carrying supplies to the International Space Station.
On board was ThinSat, the culmination of work by many at the Adler, including Guerra, who joined the Far Horizons team as a high school requirement that ended up becoming much more.
“A requirement can become a life-changing opportunity, and you don’t even know it,” she told VOA. “It’s really exciting to see, or to know, especially, that my work is going to go up into space and help in the scientific world.”
Daughter of immigrants
It is also exciting for her parents, immigrants from Guatemala, who can boast that their daughter is one of the few who can claim to have built a satellite orbiting the Earth.
“I told them it might become a worldwide type of news, and I’m going to be a part of it. And they were really proud. And they were calling my family over there and saying, ‘She might be on TV.’ And it’s something they really feel a part of me about,” Guerra said.
Long after the data compiled by ThinSat is complete, Guerra will still have a place in history as a member of the team that put the first satellite developed by a private planetarium into space.
She says her friends don’t think that’s nerdy at all.
“It’s cool, because it’s interesting to see that something so nerdy is actually going to work, and is going to go up into something so important,” she said.
…
NASA Rocket Carries Adler Planetarium Student Project into Space
For some, the launch of a rocket into space to resupply the International Space Station is a routine mission. But for a group of students working with the Adler Planetarium in Chicago, it is a historic moment marking the achievement of a lifetime. VOA’s Kane Farabaugh has more from Chicago.
…
Facebook ‘Unintentionally’ Uploaded Email Contacts of 1.5 Million Users
Facebook Inc said on Wednesday it may have “unintentionally uploaded” email contacts of 1.5 million new users since May 2016, in what seems to be the latest privacy-related issue faced by the social media company.
In March, Facebook stopped offering email password verification as an option for people signing up for the first time, the company said. In some cases, people’s email contacts were uploaded to Facebook when they created their accounts, it said.
“We estimate that up to 1.5 million people’s email contacts may have been uploaded. These contacts were not shared with anyone and we are deleting them,” Facebook told Reuters, adding that users whose contacts were imported will be notified.
The underlying glitch has been fixed, according to the company statement. Business Insider had earlier reported that the social media company harvested email contacts of the users without their knowledge or consent when they opened their accounts.
When an email password was entered, a message popped up saying it was “importing” contacts without asking for permission first, the report said.
Facebook has been hit by a number of privacy-related issues recently, including a glitch that exposed passwords of millions of users stored in readable format within its internal systems to its employees.
Last year, the company came under fire following revelations that Cambridge Analytica, a British political consulting firm, obtained personal data of millions of people’s Facebook profiles without their consent.
The company has also been facing criticism from lawmakers across the world for what has been seen by some as tricking people into giving personal data to Facebook and for the presence of hate speech and data portability on the platform.
Separately, Facebook was asked to ensure its social media platform is not abused for political purposes or to spread misinformation during elections.
…
Google: Android Users Get Browser, Search Options in EU Case
Google said Thursday it will start giving European Union smartphone users a choice of browsers and search apps on its Android operating system, in changes designed to comply with an EU antitrust ruling.
Following an Android update, users will be shown two new screens giving them the new options, Google product management director Paul Gennai said in a blog post.
The EU’s executive Commission slapped the Silicon Valley giant with a record 4.34 billion euro (then $5 billion) antitrust fine in July after finding that it abused the dominance of Android by forcing handset and tablet makers to install Google apps, reducing consumer choice.
The commission had ordered Google to come up with a remedy or face further fines. The company, which is appealing the ruling, said the changes are being rolled out over the next few weeks to both new and existing Android phones in Europe.
Android users who open the Google Play store after the update will be given the option to install up to five search apps and five browsers, Gennai said. Apps will be included based on their popularity and shown in random order. Users who choose a search app will also be asked if they want to change the default search engine in the phone’s Chrome browser.
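As a rough sketch of the selection rule Gennai describes — the most popular apps, capped at five, shown in random order — the Python snippet below picks the top entries by popularity and shuffles them. The app names and download counts are invented for illustration.

```python
# Illustrative choice-screen logic: top apps by popularity, randomized order.
import random

search_apps = {
    "SearchApp A": 900_000, "SearchApp B": 750_000, "SearchApp C": 500_000,
    "SearchApp D": 300_000, "SearchApp E": 120_000, "SearchApp F": 90_000,
}

def choice_screen(apps: dict, limit: int = 5) -> list:
    top = sorted(apps, key=apps.get, reverse=True)[:limit]  # most popular first
    random.shuffle(top)                                     # ...then randomized
    return top

print(choice_screen(search_apps))
```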
Android is the most widely used mobile operating system, beating even Apple’s iOS.
…
Samsung to Investigate Reports of Galaxy Fold Screen Problems
South Korea’s Samsung Electronics Co. Ltd. said it has received a few reports of damage to the main display of samples of its upcoming foldable smartphone and that it will investigate.
Some tech reviewers of the Galaxy Fold, a splashy $1,980 phone that opens into a tablet and that goes on sale in the United States April 26, said the phone malfunctioned after only a day or two of use.
“We will thoroughly inspect these units in person to determine the cause of the matter,” Samsung said in a statement, noting that a limited number of early Galaxy Fold samples were provided to media for review.
Screen cracking, flickering
The problem seems to be related to the unit’s screen either cracking or flickering, according to Twitter posts by technology journalists from Bloomberg, The Verge and CNBC who received the phone this week for review purposes.
Samsung, which has advertised the phone as “the future,” said removing a protective layer of its main display might cause damage, and that it will make this clear to customers.
The company said it has closed pre-orders for the Galaxy Fold because of “high demand.” It told Reuters there is no change to its release schedule following the malfunction reports.
From phone to tablet
The South Korean company’s Galaxy Fold resembles a conventional smartphone but opens like a book to reveal a second display the size of a small tablet at 7.3 inches (18.5 cm).
Although the Galaxy Fold and Huawei Technologies Co Ltd.’s Mate X foldable phones are not expected to be big sellers, the new designs were hailed as framing the future of smartphones this year in a field that has seen few surprises since Apple Inc. introduced the slab-screened iPhone in 2007.
The problems with the new phone drew comparisons to Samsung’s Galaxy Note 7 phone in 2016. Battery and design flaws in the Note 7 led to some units catching fire or exploding, forcing Samsung to recall and cancel sales of the phone. The recall wiped out nearly all of the profit in Samsung’s mobile division in the third quarter of 2016.
Samsung has said it plans to churn out at least 1 million foldable Galaxy Fold handsets globally, compared with the estimated 300 million mobile phones it produces annually in total.
Reviewers puzzled
Reviewers of the new Galaxy Fold said they did not know what the problem was and Samsung did not provide answers.
Bloomberg reporter Mark Gurman tweeted: “The screen on my Galaxy Fold review unit is completely broken and unusable just two days in. Hard to know if this is widespread or not.”
According to Gurman’s tweets, he removed a plastic layer on the screen that was not meant to be removed and the phone malfunctioned afterward.
Dieter Bohn, executive editor of The Verge, said that a “small bulge” appeared on the crease of the phone screen, which appeared to be something pressing from underneath the screen.
Bohn said Samsung replaced his test phone but did not offer a reason for the problem.
“It is very troubling,” Bohn told Reuters, adding that he did not remove the plastic screen cover.
Steve Kovach, tech editor at CNBC.com, tweeted a video of half of his phone’s screen flickering after he had used it for just a day.
…