Category Archives: Technology

Silicon Valley & Technology News

FOMO at SXSW: How to Conquer Fear of Missing Out in Austin

The South by Southwest festival in Austin, Texas, starts Friday. It’s grown from a grassroots event to a phenomenon that attracts 400,000 people.

For attendees, it can feel overwhelming. What’s worth your time? Where’s the buzz?

The latest AP Travel “Get Outta Here” podcast offers strategies for conquering FOMO (fear of missing out) at SXSW.

One approach is to let the nostalgia acts go – the former big-name bands promoting comebacks. Instead, pack your schedule with artists who have their best years ahead of them.

And you need a plan; you can’t just wing it. Be ready for long lines, but have some backups: consider less-crowded venues outside downtown. Film screenings take place at theaters all over town, and up-and-coming bands play plenty of shows.

Facebook, Twitter Urged to Do More to Police Hate on Sites

Tech giants Facebook, Twitter and Google are taking steps to police terrorists and hate groups on their sites, but more work needs to be done, the Simon Wiesenthal Center said Tuesday.

The organization released its annual digital terrorism and hate report card and gave a B-plus to Facebook, a B-minus to Twitter and a C-plus to Google.

Facebook spokeswoman Christine Chen said the company had no comment on the report. Representatives for Google and Twitter did not immediately return emails seeking comment.

Rabbi Abraham Cooper, the Wiesenthal Center’s associate dean, said Facebook in particular built “a recognition that bad folks might try to use their platform” into its business model. “There is plenty of material they haven’t dealt with to our satisfaction, but overall, especially in terms of hate, there’s zero tolerance,” Cooper said at a New York City news conference.

Rick Eaton, a senior researcher at the Wiesenthal Center, said hateful and violent posts on Instagram, which is part of Facebook, are quickly removed, but not before they can be widely shared.

He pointed to Instagram posts threatening terror attacks at the upcoming World Cup in Moscow. Another post promoted suicide attacks with the message, “You only die once. Why not make it martyrdom.”

Cooper said Twitter used to merit an F rating before it started cracking down on Islamic State tweets in 2016. He said the move came after testimony before a congressional committee revealed that “ISIS was delivering 200,000 tweets a day.”

Cooper and Eaton said that as the big tech companies have gotten more aggressive in shutting down accounts that promote terrorism, racism and anti-Semitism, promoters of terrorism and hate have migrated to other sites such as VK.com, a Facebook lookalike that’s based in Russia.

There also are “alt-tech” sites like GoyFundMe, an alternative to GoFundMe, and BitChute, an alternative to Google-owned YouTube, Cooper said.

“If there’s an existing company that will give them a platform without looking too much at the content, they’ll use it,” he said. “But if not, they are attracted to those platforms that have basically no rules.”

The Los Angeles-based Wiesenthal Center is dedicated to fighting anti-Semitism, hate and terrorism.

Porsche Says Flying Cab Technology Could Be Ready Within Decade

Porsche is studying flying passenger vehicles but expects it could take up to a decade to finalize technology before they can launch in real traffic, its head of development said Tuesday.

Volkswagen’s sports car division is in the early stages of drawing up a blueprint of a flying taxi as it ponders new mobility solutions for congested urban areas, Porsche R&D chief Michael Steiner said at the Geneva auto show.

The maker of the 911 sports car would join a raft of companies working on designs for flying cars in anticipation of a shift in the transport market toward self-driving vehicles and on-demand digital mobility services.

“We are looking into how individual mobility can take place in congested areas where today and in the future it is unlikely that everyone can drive the way he wants,” Steiner said in an interview.

Italdesign, VW’s design subsidiary, and Airbus exhibited an evolved version of the two-seater flying car Pop.Up at the Geneva show. The vehicle, designed to avoid gridlock on city roads, premiered at the annual industry gathering a year ago.

Separately, Porsche expects the cross-utility variant of its all-electric Mission E sports car to attract at least 20,000 buyers if it gets approved for production, Steiner said.

Porsche will decide later this year whether to build the Mission E Cross Turismo concept, which surges to 100 kph (62 mph) in less than 3.5 seconds, he said.

AI Has a Dirty Little Secret: It’s Powered by People

There’s a dirty little secret about artificial intelligence: It’s powered by an army of real people.

From makeup artists in Venezuela to women in conservative parts of India, people around the world are doing the digital equivalent of needlework: drawing boxes around cars in street photos, tagging images, and transcribing snatches of speech that computers can’t quite make out.

Such data feeds directly into “machine learning” algorithms that help self-driving cars wind through traffic and let Alexa figure out that you want the lights on. Many such technologies wouldn’t work without massive quantities of this human-labeled data.
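
To make the idea of human-labeled data concrete, here is a minimal sketch of the kind of bounding-box annotation a labeler produces and how it might become a training example. The field names and format are illustrative assumptions, not any particular platform’s schema.

    # Illustrative only: a simplified bounding-box label of the kind a human
    # annotator draws for self-driving training data. Field names are
    # hypothetical, not any specific vendor's format.
    from dataclasses import dataclass

    @dataclass
    class BoxLabel:
        image_id: str   # which street photo the box belongs to
        category: str   # e.g. "car", "pedestrian", "traffic_light"
        x: float        # left edge, in pixels
        y: float        # top edge, in pixels
        width: float
        height: float

    def to_training_example(label: BoxLabel) -> dict:
        """Turn one human-drawn box into the (input, target) pair a
        detection model would train on."""
        return {
            "image": label.image_id,
            "target": {
                "class": label.category,
                "bbox": [label.x, label.y, label.width, label.height],
            },
        }

    # One annotator's work product: a box drawn around a car in a street photo.
    example = to_training_example(BoxLabel("street_0001.jpg", "car", 412.0, 230.5, 96.0, 54.0))
    print(example)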

These repetitive tasks pay pennies apiece. But in bulk, this work can offer a decent wage in many parts of the world – even in the U.S. And it underpins a technology that could change humanity forever: AI that will drive us around, execute verbal commands without flaw, and – possibly – one day think on its own.

For more than a decade, Google has used people to rate the accuracy of its search results. More recently, investors have poured tens of millions of dollars into startups like Mighty AI and CrowdFlower, which are developing software that makes it easier to label photos and other data, even on smartphones.

Venture capitalist S. “Soma” Somasegar says he sees “billions of dollars of opportunity” in servicing the needs of machine learning algorithms. His firm, Madrona Venture Group, invested in Mighty AI. Humans will be in the loop “for a long, long, long time to come,” he says.

Accurate labeling could determine whether a self-driving car can tell the sky from the side of a truck – a distinction Tesla’s Model S failed to make in 2016, in the first known fatality involving a self-driving system.

“We’re not building a system to play a game, we’re building a system to save lives,” says Mighty AI CEO Daryn Nakhuda.

Marjorie Aguilar, a 31-year-old freelance makeup artist in Maracaibo, Venezuela, spends four to six hours a day drawing boxes around traffic objects to help train self-driving systems for Mighty AI.

She earns about 50 cents an hour, but in a crisis-wracked country with runaway inflation, just a few hours’ work can pay a month’s rent in bolivars.

“It doesn’t sound like a lot of money, but for me it’s pretty decent,” she says. “You can imagine how important it is for me getting paid in U.S. dollars.”

Aria Khrisna, a 36-year-old father of three in Tegal, Indonesia, says that adding word tags to clothing pictures on websites such as eBay and Amazon pays him about $100 a month, roughly half his income.

And for 25-year-old Shamima Khatoon, her job annotating cars, lane markers and traffic lights at an all-female outpost of data-labeling company iMerit in Metiabruz, India, represents the only chance she has to work outside the home in her conservative Muslim community.

“It’s a good platform to increase your skills and support your family,” she says.

The benefits of greater accuracy can be immediate. At InterContinental Hotels Group, every call that its digital assistant Amelia can take from a human saves $5 to $10, says information technology director Scot Whigham.

When Amelia fails, the program listens while a call is rerouted to one of about 60 service desk workers. It learns from their response and tries the technique out on the next call, freeing up human employees to do other things.

When a computer can’t make out a customer call to the Hyatt Hotels chain, an audio snippet is sent to AI-powered call center Interactions in an old brick building in Franklin, Massachusetts. There, while the customer waits on the phone, one of a roomful of headphone-wearing “intent analysts” transcribes everything from misheard numbers to profanity and quickly directs the computer how to respond.

That information feeds back into the system. “Next time through, we’ve got a better chance of being successful,” says Robert Nagle, Interactions’ chief technology officer.
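
A rough sketch of this kind of human-in-the-loop fallback, assuming a hypothetical speech model that reports a confidence score; the function names are illustrative stand-ins, not Interactions’ actual system.

    # Illustrative human-in-the-loop fallback, loosely modeled on the flow
    # described above. asr_model and ask_human_analyst are hypothetical
    # placeholders, not a real vendor API.

    CONFIDENCE_THRESHOLD = 0.85   # made-up cutoff for "the machine is sure enough"
    feedback_log = []             # corrections later used to retrain the model

    def handle_utterance(audio, asr_model, ask_human_analyst):
        """Try automatic transcription first; route the snippet to a human
        analyst when the model is unsure, and record the correction so it
        can feed back into training."""
        text, confidence = asr_model.transcribe(audio)
        if confidence >= CONFIDENCE_THRESHOLD:
            return text
        # Machine is unsure: a human "intent analyst" transcribes the snippet
        # while the caller waits, and the correction is logged for retraining.
        human_text = ask_human_analyst(audio)
        feedback_log.append({"audio": audio, "transcript": human_text})
        return human_text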

Researchers have tried to find workarounds to human-labeled data, often without success.

In a project that used Google Street View images of parked cars to estimate the demographic makeup of neighborhoods, then-Stanford researcher Timnit Gebru tried to train her AI by scraping Craigslist photos of cars for sale that were labeled by their owners.

But the product shots didn’t look anything like the car images in Street View, and the program couldn’t recognize them. In the end, she says, she spent $35,000 to hire auto dealer experts to label her data.

Trevor Darrell, a machine learning expert at the University of California Berkeley, says he expects it will be five to 10 years before computer algorithms can learn to perform without the need for human labeling. His group alone spends hundreds of thousands of dollars a year paying people to annotate images.

Applications for Facial Recognition Increase as Technology Matures

Thanks to advances in the underlying technology, facial recognition can now be used everywhere from shopping centers and airports to concert venues and mobile phones.

Countries including China and the United States are developing, testing and using facial recognition technology. At Los Angeles International Airport, the U.S. Transportation Security Administration, or TSA, has been trying out several security devices, including a facial recognition system that takes a picture of the passenger and compares it with the passport photo just before he or she goes through airport security.
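
In broad terms, a 1:1 check like this usually comes down to comparing face “embeddings,” numeric fingerprints of each image. Below is a minimal sketch assuming a hypothetical embed() function and an arbitrary similarity threshold; it is not a description of the TSA’s actual system.

    # Illustrative 1:1 face verification: compare a live capture to the
    # passport photo via embedding similarity. embed() is a hypothetical
    # stand-in for a face-embedding model; the threshold is made up.
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def same_person(live_photo, passport_photo, embed, threshold=0.6):
        """Return True if the two images are likely the same traveler."""
        return cosine_similarity(embed(live_photo), embed(passport_photo)) >= threshold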

“We’re always looking at technology, processes, even doctrine changes on how to better our security at an airport,” said Steve Karoly, acting assistant administrator with the TSA’s Office of Requirements and Capabilities Analysis.

If any of the technologies being tested are adopted, it will take the TSA two to three years to install them in U.S. airports.

It is just one way facial recognition technology can be used for security.

Software to aid authorities

U.S. company FaceFirst has developed facial recognition software that can help police. Officers can take a picture of a suspect with a smartphone. The photo then can be compared to a database to see whether the person has a criminal history. The software can also be used in a private facility or store.

“We install a complete solution that allows our customers to be able to match people who are entering a facility against a database that already exists of bad people and so if there is a match that occurs, we’re able to send an alert to a mobile device like an iPhone or an Android phone in near real time,” said Peter Trepp, FaceFirst’s chief executive officer.
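
The watchlist scenario Trepp describes is a 1:N search rather than a 1:1 check: every face captured at the entrance is compared against each enrolled record, and a hit triggers an alert. A rough sketch under the same assumptions, with hypothetical helper functions rather than FaceFirst’s actual API:

    # Illustrative 1:N watchlist check, as opposed to the 1:1 passport match
    # sketched earlier. capture_faces(), embed() and send_alert() are
    # hypothetical placeholders, not a real vendor API.
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def check_against_watchlist(frame, watchlist, embed, capture_faces,
                                send_alert, threshold=0.6):
        """Compare each face in a camera frame to an enrolled watchlist and
        push a near-real-time alert to a phone when a likely match is found."""
        for face in capture_faces(frame):
            face_vec = embed(face)
            for record in watchlist:  # each record: {"name": ..., "embedding": [...]}
                if cosine_similarity(face_vec, record["embedding"]) >= threshold:
                    send_alert(device="security-phone",
                               message="Possible match: " + record["name"])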

Beijing start-up Horizon Robotics demonstrated its facial recognition applications at the Consumer Electronics Show in Las Vegas earlier this year, with footage of a crowded street in China and of a store where shoppers’ faces are captured.

“For surveillance, you can catch the face in the public and find what you want to find, and for the commercial use, you can find VIPs [very important persons] when they come to the store, and so you can have special service for them,” said Hao Yuan Gao of Horizon Robotics.

Strides in facial recognition technology

Facial recognition technology has made great advances in recent years. 

“That has a lot to do with – computers are finally fast enough. We have GPUs (graphics processing units) and hardware that is fast enough to process all the data that we need to process,” said FaceFirst’s Trepp.

Machines can now match faces even when the images are not taken in a controlled environment with good lighting and a full frontal view. A side view or a moving image of the face may be enough for artificial intelligence to make a match.

“Where this is going is very exciting. We think about everyday items that we have that are going away. Our house keys, our car keys, our ATM cards, our passwords are all starting to go away and instead, we’re going to be using facial recognition. Smartphones, of course are now using facial recognition. Laptops have facial recognition on them,” Trepp said.

“I think the fact that you can use this in uncontrolled environments makes it a much more interesting technology commercially,” said Prem Natarajan, research professor of computer science and the Michael Keston executive director at the University of Southern California Information Sciences Institute.

The idea of privacy

The ability to capture an image of a person without consent is a privacy gray area. Many smartphones now use facial recognition, so photos taken on a phone can pull up the faces of friends along with a timestamp and location information, and on social media such as Facebook, faces can be tagged. The technology is widely accepted and used by many people.

“There is visual of you, who you’re with, so it’s no longer just about your privacy. Whoever you’re with, the photos you’re taking of them, it’s like secondhand smoking – everybody you take a selfie with, etc., you’ve compromised as an individual their privacy, too, in some sense and we’re not seeking consent from any of them,” Natarajan said.

However, the idea of privacy has evolved for the younger generation, who have grown up with the internet and social media.

“The new generation, I think, has a different perspective on privacy than we do. My kids, your kids, all of our kids are growing up in a much more shared experience world,” Natarajan added. “My biggest privacy concern is actually not the government, it’s the big companies where there are really no limits on how they can share data, what they can use it for, how they can exploit it.”

“It’s a powerful tool and with power comes responsibility,” Trepp said.

FaceFirst designed privacy into its software. The company says that, by default, surveillance footage of unknown people is automatically purged from the system at regular intervals.

Facial recognition researchers say a social framework should be created to guide the use of this technology so that it benefits society rather than exploits it.
