If you want your technology sector to expand rapidly, it pays to have strong support from the government, easy access to bank loans and a large market hungry for your products. All this is available in China, where technology companies are expanding at a rapid pace, making other countries, including the U.S., a bit uneasy. VOA’s George Putic reports.
…
Can a River Model Save Eroding Mississippi Delta?
Thousands of years of sediment carried by the Mississippi River created 25,000 square kilometers of land, marsh and wetlands along Louisiana’s coast. But engineering projects stopped the flow of sediment, and rising seas driven by climate change have made the Mississippi Delta the fastest-disappearing land on Earth. Louisiana State University researchers have built a miniature model of the river system to try to stop the erosion and rebuild the delta. Faith Lapidus narrates this report from Deborah Block.
…
Genetics Help Spot Food Contamination
A new approach for detecting food poisoning is being used to investigate the recent outbreak of E. coli bacteria in romaine lettuce grown in the U.S. state of Arizona. The tainted produce has sickened at least 84 people in 19 states. The new method, used by the Centers for Disease Control and Prevention, relies on genetic sequencing. And as Faiza Elmasry tells us, it has the potential to revolutionize the detection of food poisoning outbreaks. VOA’s Faith Lapidus narrates.
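The report does not spell out how sequencing links cases, but the underlying idea is that bacterial isolates from different patients are sequenced and compared, and cases whose genomes differ by only a handful of mutations are treated as likely sharing a contamination source. The sketch below is a toy illustration of that comparison step, assuming short, already-aligned sequence fragments and an arbitrary mutation threshold; real investigations such as the CDC’s use whole genomes and specialized pipelines.

```python
# Toy illustration of the idea behind sequencing-based outbreak detection:
# isolates whose genomes differ by only a few single-nucleotide changes are
# likely to share a contamination source. This is NOT the CDC's pipeline;
# real analyses use whole genomes and dedicated bioinformatics tools.

def snp_distance(seq_a: str, seq_b: str) -> int:
    """Count positions where two equal-length aligned sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

def cluster_isolates(isolates: dict, threshold: int = 3) -> list:
    """Group isolates whose SNP distance to any cluster member is within the threshold."""
    clusters = []
    for name, seq in isolates.items():
        for cluster in clusters:
            if any(snp_distance(seq, isolates[other]) <= threshold for other in cluster):
                cluster.add(name)
                break
        else:
            clusters.append({name})
    return clusters

# Hypothetical, made-up sequence fragments for illustration only.
samples = {
    "patient_AZ_1": "ATGGCTTACGT",
    "patient_OH_2": "ATGGCTTACGA",  # 1 mutation away: likely the same outbreak strain
    "patient_NY_3": "ATGACTGTCCA",  # many mutations away: unrelated strain
}
print(cluster_isolates(samples))   # [{'patient_AZ_1', 'patient_OH_2'}, {'patient_NY_3'}]
```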
…
EU Piles Pressure on Social Media Over Fake News
Tech giants such as Facebook and Google must step up efforts to tackle the spread of fake news online in the next few months or potentially face further EU regulation, as concerns mount over election interference.
The European Commission said on Thursday it would draw up a Code of Practice on Disinformation for the 28-nation EU by July with measures to prevent the spread of fake news such as increasing scrutiny of advertisement placements.
EU policymakers are particularly worried that the spread of fake news could interfere with European elections next year, after Facebook disclosed that Russia tried to influence U.S. voters through the social network in the run-up to the 2016 U.S. election. Moscow denies such claims.
“These [online] platforms have so far failed to act proportionately, falling short of the challenge posed by disinformation and the manipulative use of platforms’ infrastructure,” the Commission wrote in its strategy for tackling fake news published on Thursday.
“The Commission calls upon platforms to decisively step up their efforts to tackle online disinformation.”
Advertisers and online platforms should deliver “measurable effects” under the code of practice by October, failing which the Commission could propose further actions, including regulation “targeted at a few platforms.”
Companies will have to work harder to close fake accounts, take steps to reduce revenues for purveyors of disinformation and limit targeting options for political adverts.
The Commission, the EU’s executive, will also support the creation of an independent European network of fact-checkers and launch an online platform on disinformation.
Tech industry association CCIA said the October deadline for progress appeared rushed.
“The tech industry takes the spread of disinformation online very seriously. … When drafting the Code of Practice, it is important to recognize that there is no one-size-fits-all solution to address this issue given the diversity of affected services,” said Maud Sacquet, CCIA Europe Senior Policy Manager.
Weaponizing fake news
The revelation that political consultancy Cambridge Analytica, which worked on U.S. President Donald Trump’s campaign, improperly accessed the data of up to 87 million Facebook users has further rocked public trust in social media.
“There are serious doubts about whether platforms are sufficiently protecting their users against unauthorized use of their personal data by third parties, as exemplified by the recent Facebook/Cambridge Analytica revelations,” the Commission wrote.
Facebook has stepped up fact-checking in its fight against fake news and is trying to make it uneconomical for people to post such content by lowering its ranking and making it less visible. The world’s largest social network is also working on giving its users more context and background about the content they read on the platform.
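Facebook has not published how that demotion works, but the mechanism described here, lowering the ranking of content that fact-checkers rate as false so it reaches fewer people, can be illustrated with a minimal sketch. The field names and demotion multiplier below are invented for the example and are not Facebook’s actual ranking system.

```python
# Illustrative sketch only: demoting the feed-ranking score of posts that
# fact-checkers have rated false so they surface less often. Field names and
# the demotion multiplier are invented; this is not Facebook's ranking system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float      # baseline score from likes, comments, shares
    flagged_false: bool = False  # set when third-party fact-checkers rate the post false

FACT_CHECK_DEMOTION = 0.2        # assumed multiplier: flagged posts rank far lower

def ranking_score(post: Post) -> float:
    score = post.engagement_score
    if post.flagged_false:
        score *= FACT_CHECK_DEMOTION
    return score

feed = [
    Post("legitimate_article", engagement_score=120.0),
    Post("viral_hoax", engagement_score=400.0, flagged_false=True),
]
# Despite far higher raw engagement, the flagged hoax sinks below the real article.
for post in sorted(feed, key=ranking_score, reverse=True):
    print(post.post_id, ranking_score(post))
```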
“The weaponization of online fake news and disinformation poses a serious security threat to our societies,” said Julian King, EU Commissioner for security. “The subversion of trusted channels to peddle pernicious and divisive content requires a clear-eyed response based on increased transparency, traceability and accountability.”
Campaign group European Digital Rights warned that the Commission ought not to rush into taking binding measures over fake news which could have an effect on the freedom of speech.
King rejected any suggestion that the proposal would lead to censorship or a crackdown on satire or partisan news.
“It’s a million miles away from censorship,” King told a news conference. “It’s not targeting partisan journalism, freedom of speech, freedom to disagree, freedom to be, in some cases, a bit disagreeable.”
Commission Vice-President Andrus Ansip said there had been some debate internally over whether to explicitly mention Russia in the fake news strategy.
“Some people say that we don’t want to name just one name. And other people say that ‘add some other countries also and then we will put them all on our list’, but unfortunately nobody is able to name those others,” the former Estonian prime minister said.
…
Facebook’s Rise in Profits, Users Shows Resilience
Facebook Inc. shares rose Wednesday after the social network reported a surprisingly strong 63 percent rise in profit and an increase in users, with no sign that business was hurt by a scandal over the mishandling of personal data.
After easily beating Wall Street expectations, shares traded up 7.1 percent after the bell at $171, paring a month-long decline that began with Facebook’s disclosure in March that consultancy Cambridge Analytica had harvested data belonging to millions of users.
The Cambridge Analytica scandal, affecting up to 87 million users and prompting several apologies from Chief Executive Mark Zuckerberg, generated calls for regulation and for users to leave the social network, but there was no indication advertisers immediately changed their spending.
“Everybody keeps talking about how bad things are for Facebook, but this earnings report to me is very positive, and reiterates that Facebook is fine, and they’ll get through this,” said Daniel Morgan, senior portfolio manager at Synovus Trust Company. His firm holds about 73,000 shares in Facebook.
Facebook’s quarterly profit beat analysts’ estimates, as a 49 percent jump in quarterly revenue outpaced a 39 percent rise in expenses from a year earlier. The mobile ad business grew on a push to add more video content.
Facebook said monthly active users in the first quarter rose to 2.2 billion, up 13 percent from a year earlier and matching expectations, according to Thomson Reuters.
The company reversed last quarter’s decline in the number of daily active users in the United States and Canada, saying it had 185 million users there, up from 184 million in the fourth quarter.
Resilient business model
The results are a bright spot for the world’s largest social network amid months of negative headlines about the company’s handling of personal information, its role in elections and its fueling of violence in developing countries.
Facebook, which generates revenue primarily by selling advertising personalized to its users, has demonstrated for several quarters how resilient its business model can be as long as users keep coming back to scroll through its News Feed and watch its videos.
It is spending to ensure users are not scared away by scandals. Chief Financial Officer David Wehner told analysts on a call that expenses this year would grow between 50 percent and 60 percent, up from a prior range of 45 percent to 60 percent.
Spending on security
Much of Facebook’s ramp-up in spending is for safety and security, Wehner said. The category includes efforts to root out fake accounts, scrub hate speech and take down violent videos.
Facebook said it ended the first quarter with 27,742 employees, up 48 percent from a year earlier.
“So long as profits continue to grow at a rapid rate, investors will accept that higher spending to ensure privacy is warranted,” Wedbush Securities analyst Michael Pachter said.
It has been nearly two years since Facebook shares rose 7 percent or more during a trading day. They rose 7.2 percent on April 28, 2016, the day after another first-quarter earnings report.
Net income attributable to Facebook shareholders rose in the first quarter to $4.99 billion, or $1.69 per share, from $3.06 billion, or $1.04 per share, a year earlier.
Analysts on average were expecting a profit of $1.35 per share, according to Thomson Reuters.
Total revenue was $11.97 billion, above the analyst estimate of $11.41 billion.
Some details secret
The company declined to provide some details sought by analysts. It has not shared the revenue generated by Instagram, the photo-sharing app it owns, and it declined to provide details about time spent on Facebook. Facebook also owns the popular smartphone apps Messenger and WhatsApp.
Tighter regulation could make Facebook’s ads less lucrative by reducing the kinds of data it can use to personalize and target ads to users, although Facebook’s size means it could also be well positioned to cope with regulations.
Facebook and Alphabet Inc.’s Google together dominate the internet ad business worldwide. Facebook is expected to take 18 percent of global digital ad revenue this year, compared with Google’s 31 percent, according to research firm eMarketer.
The company said it was increasing the amount of money authorized to repurchase shares by an additional $9 billion. It had initially authorized repurchases up to $6 billion.
…
YouTube Overhauls Kids’ App
YouTube is overhauling its kid-focused video app to give parents the option of letting humans, not computer algorithms, select what shows their children can watch.
The updates that begin rolling out April 26, 2018, are a response to complaints that the YouTube Kids app has repeatedly failed to filter out disturbing content.
Google-owned YouTube launched the toddler-oriented app in 2015. It has described it as a “safer” experience than the regular YouTube video-sharing service for finding “Peppa Pig” episodes or watching user-generated videos of people unboxing toys, teaching guitar lessons or experimenting with science.
Failure of screening system
In order to meet U.S. child privacy rules, Google says it bans kids under 13 from using its core video service. But its official terms of agreement are largely ignored by tens of millions of children and their families who don’t bother downloading the under-13 app.
Both the grown-up video service and the YouTube Kids app have been criticized by child advocates for their commercialism and for the failures of a screening system that relies on artificial intelligence. The app is engineered to automatically exclude content that’s not appropriate for kids, and recommend videos based on what children have watched before. That hasn’t always worked to parents’ liking — especially when videos with profanity, violence or sexual themes slip through the filters.
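As a rough illustration of the difference between the two approaches at issue here, an automated filter that screens individual videos versus a human-curated whitelist of approved channels, consider the sketch below. The channel names, catalog entries and stand-in “classifier” are all invented for illustration; this is not YouTube’s implementation.

```python
# Toy sketch of the two modes: an automated filter that can let untagged
# problem videos slip through, versus a human-curated whitelist that only
# surfaces pre-approved channels. Every name and the stand-in "classifier"
# below are invented; this is not YouTube's code.

HUMAN_APPROVED_CHANNELS = {"Sesame Street", "PBS Kids"}  # assumed whitelist

def automated_filter_allows(video: dict) -> bool:
    """Crude stand-in for the ML filter: block only videos already tagged as
    mature, so an untagged disturbing video slips through -- the failure mode
    parents complained about."""
    return not video.get("tagged_mature", False)

def select_videos(videos: list, curated_only: bool) -> list:
    if curated_only:
        return [v for v in videos if v["channel"] in HUMAN_APPROVED_CHANNELS]
    return [v for v in videos if automated_filter_allows(v)]

catalog = [
    {"title": "Counting with Elmo", "channel": "Sesame Street"},
    {"title": "Disturbing cartoon parody", "channel": "UnknownUploader"},  # never tagged
]
print([v["title"] for v in select_videos(catalog, curated_only=False)])  # both appear
print([v["title"] for v in select_videos(catalog, curated_only=True)])   # only vetted content
```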
Updates give parents option
The updates allow parents to switch off the automated system and choose a contained selection of children’s programming such as Sesame Street and PBS Kids. But the automated system remains the default.
“For parents who like the current version of YouTube Kids and want a wider selection of content, it’s still available,” said James Beser, the app’s product director, in a blog post Wednesday. “While no system is perfect, we continue to fine-tune, rigorously test and improve our filters for this more-open version of our app.”
Beser also encouraged parents to block videos and flag them for review if they don’t think they should be on the app. But the practice of addressing problem videos after children have already been exposed to them has bothered child advocates who want the more controlled option to be the default.
Cleaner, safer kids’ app
“Anything that gives parents the ability to select programming that has been vetted in some fashion by people is an improvement, but I also think not every parent is going to do this,” said Josh Golin, director of the Boston-based Campaign for a Commercial-Free Childhood. “Giving parents more control doesn’t absolve YouTube of the responsibility of keeping the bad content out of YouTube Kids.”
He said Google should aim to build an even cleaner and safer kids’ app, then pull all the kid-oriented content off the regular YouTube — where most kids are going — and onto that app.
Golin’s group recently asked the Federal Trade Commission to investigate whether YouTube’s data collection and advertising practices violate federal child privacy rules. He said advocates plan to meet with FTC officials next week.
…
Will Robot Baristas Replace Traditional Cafes?
There has been a long tradition of making and drinking coffee across cultures and continents. Now, a tech company in Austin, Texas, is adding to this tradition by creating robot baristas to make the coffee-drinking experience more convenient. For a price similar to that of a cup of Starbucks coffee, a robot can now make one, too.
“I think it’s super cool. It’s so innovative. I’ve never seen anything like this before. It’s really fun to watch your coffee getting made by a robot,” said Wendy Cummings, who just received her drink made by the robot barista.
Created by the company Briggo, the barista is a robotic arm housed in a kiosk that can brew a fresh cup of designer coffee at any time of day.
This robot barista also solves a global problem, said Charles Studor, Briggo’s founder.
“Coffee is ubiquitous, and this problem that we’re solving is common around the world. The problem is very high-quality coffee that’s convenient, that’s consistent, done just the way you like and that is very efficient in the use of the beans and the raw materials,” Studor said.
Customers can download the Briggo app on their mobile phones and customize their order. When their coffee is made, they can go to the robot barista and pick up their order.
“It’s perfect to just not wait in lines, just getting there picking your coffee, and you’re good to go for the day,” said Astrid Chacon, who just tried a robot-made coffee.
Plans for social impact
“I started the company really thinking about the way we consume quality products in the West,” Studor said. “We’re often very wasteful, and we don’t really understand what it’s taken to get those quality beans, in this case, beans to our mouth, essentially. And we want to be able to connect at the end of the day, not just solve the problem of quality coffee — convenient, but also back to origin.”
Studor’s long-term goal is to create social impact by connecting the consumer with the coffee grower. That kind of relationship is made possible by the internet and social media, Studor said.
“Maybe the farmers have some issues. Maybe we can do programs where we connect you and say, ‘Help with a water project,’ or ‘Help with a motor blower that’s gone down in a small cooperative.’ So, how do we use the technology of this century to connect people in lots of different ways? And coffee is a common ground that everyone can relate to,” Studor said.
Robots vs. humans
While big-name coffee brands such as Starbucks dedicate a portion of their business to bettering the farming communities that supply their products, will the robot barista with a social cause threaten more traditional cafes?
“It’s going to take a little bit of time, but I’m sure we’re going to be having this instead of Starbucks,” Chacon said.
Studor believes robot and human baristas can serve different coffee needs.
“It’s a big market. There are specialty coffee shops where there are high-quality trained baristas that I don’t think we’ll ever replace. I mean, there is a place and time for those. But there’s a lot of places around. Think about a hospital in the middle of the night. Where is the quality coffee there? Where is it at 5:30 in the morning at the airport? And so, we want to get that quality experience in all those places that are really underserved,” Studor said.
Briggo’s robot baristas are currently in the Austin area. The company plans to put a robot kiosk at Austin’s airport, expand to corporate campuses in other major Texas cities, and then move to the U.S. East and West coasts and beyond.
…
Beijing Auto Show Highlights E-cars Designed for China
Volkswagen and Nissan have unveiled electric cars designed for China at a Beijing auto show that highlights the growing importance of Chinese buyers for a technology seen as a key part of the global industry’s future.
General Motors displayed five all-electric models Wednesday including a concept Buick SUV it says can go 600 kilometers (375 miles) on one charge. Ford and other brands showed off some of the dozens of electric SUVs, sedans and other models they say are planned for China.
Auto China 2018, the industry’s biggest sales event this year, is overshadowed by mounting trade tensions between Beijing and U.S. President Donald Trump, who has threatened to hike tariffs on Chinese goods including automobiles in a dispute over technology policy.
The impact on automakers should be small, according to industry analysts, because exports amount to only a few thousand vehicles a year. Those include a GM SUV, the Envision, and Volvo Cars sedans made in China for export to the United States.
China accounted for half of last year’s global electric car sales, boosted by subsidies and other prodding from communist leaders who want to make their country a center for the emerging technology.
“The Chinese market is key for the international auto industry and it is key to our success,” VW CEO Herbert Diess said on Tuesday.
Volkswagen unveiled the E20X, an SUV that is the first model for SOL, an electric brand launched by the German automaker with a Chinese partner. The E20X, promising a 300-kilometer (185-mile) range on one charge, is aimed at the Chinese market’s bargain-priced tiers, where demand is strongest.
GM, Ford, Daimler AG’s Mercedes unit and other automakers also have announced ventures with local partners to develop models for China that deliver more range at lower prices.
On Wednesday, Nissan Motor Co. presented its Sylphy Zero Emission, which it said can go 338 kilometers (210 miles) on a charge. The Sylphy is based on Nissan’s Leaf, a version of which is available in China but has sold poorly due to its relatively high price.
Automakers say they expect electrics to account for 35 to over 50 percent of their China sales by 2025.
First-quarter sales of electrics and gasoline-electric hybrids rose 154 percent over a year earlier to 143,000 units, according to the China Association of Automobile Manufacturers. That compares with sales of just under 200,000 for all of last year in the United States, the No. 2 market.
That trend has been propelled by the ruling Communist Party’s support for the technology. The party is shifting the financial burden to automakers with sales quotas that take effect next year and require them to earn credits by selling electrics or buy them from competitors.
That increases pressure to transform electrics into a mainstream product that competes on price and features.
Automakers also displayed dozens of gasoline-powered models from compact sedans to luxurious SUVs. Their popularity is paying for development of electrics, which aren’t expected to become profitable for most producers until sometime in the next decade.
China’s total sales of SUVs, sedans and minivans reached 24.7 million units last year, compared with 17.2 million for the United States.
SUVs are the industry’s cash cow. First-quarter sales rose 11.3 percent over a year earlier to 2.6 million, or almost 45 percent of total auto sales, according to the China Association of Automobile Manufacturers.
On Wednesday, Ford displayed its Mondeo Energi plug-in hybrid, its first electric model for China, which went on sale in March. Plans call for Ford and its luxury unit, Lincoln, to release 15 new electrified vehicles by 2025.
GM plans to launch 10 electrics or hybrids in China through 2020.
VW is due to launch 15 electrics and hybrids in the next two to three years as part of a 10 billion euro ($12 billion) development plan announced in November.
Nissan says it will roll out 20 electrified models in China over the next five years.
New but fast-growing Chinese automakers trail global rivals in traditional gasoline technology, but industry analysts say the top Chinese brands are catching up in electrics, a market with no entrenched leaders.
BYD Auto, the biggest global electric brand by number sold, debuted two hybrid SUVs and an electric concept car.
The company, which manufactures electric buses at a California factory and exports battery-powered taxis to Europe, also displayed nine other hybrid and plug-in electric models.
Chery Automobile Co. showed a lineup that included two electric sedans, an SUV and a hatchback, all promising 250 to 400 kilometers (150 to 250 miles) on a charge. They include futuristic features such as internet-linked navigation and smartphone-style dashboard displays.
“Our focus is not just an EV that runs. It is excellent performance,” Chery CEO Chen Anning said in an interview ahead of the show.
Electrics are likely to play a leading role as Chery develops plans announced last year to expand to Western Europe, said Chen. He said the company has yet to decide on a timeline.
Chery was China’s biggest auto exporter last year, selling 108,000 gasoline-powered vehicles abroad, though mostly in developing markets such as Russia and Egypt.
“We do have a clear intention to bring an EV product as one of our initial offerings” in Europe, Chen said.
…
Flying Taxi Start-Up Hires Designer Behind Modern Mini, Fiat 500
Lilium, a German start-up with Silicon Valley-scale ambitions to put electric “flying taxis” in the air next decade, has hired Frank Stephenson, the designer behind iconic car brands including the modern Mini, Fiat 500 and McLaren P1.
Lilium is developing a lightweight aircraft powered by 36 electric jet engines mounted on its wings. It aims to travel at speeds of up to 300 kilometers (186 miles) per hour, with a range of 300 km on a single charge, the firm has said.
Founded in 2015 by four Munich Technical University students, the Bavarian firm has set out plans to demonstrate a fully functional vertical take-off electric jet by next year and to begin online booking of commuter flights by 2025.
It is one of a number of companies, from Chinese automaker Geely to U.S. ride-sharing firm Uber, looking to tap advances in drone technology, high-performance materials and automated driving to turn aerial driving – long a staple of science fiction movies like “Blade Runner” – into reality.
Stephenson, 58, who holds American and British citizenship, will join the aviation start-up in May. He lives west of London and will commute weekly to Lilium’s offices outside of Munich.
His job is to design a plane on the outside and a car inside.
Famous for a string of hits at BMW, Mini, Ferrari, Maserati, Fiat, Alfa Romeo and McLaren, Stephenson will lead all aspects of Lilium design, including the interior and exterior of its jets, the service’s landing pads and even its departure lounges.
“With Lilium, we don’t have to base the jet on anything that has been done before,” Stephenson told Reuters in an interview.
“What’s so incredibly exciting about this is we’re not talking about modifying a car to take to the skies, and we are not talking about modifying a helicopter to work in a better way.”
Stephenson recalled working at Ferrari a dozen years ago and thinking it was the greatest job a grown-up kid could ever want.
But the limits of working at such a storied carmaker dawned on him: “I always had to make a car that looked like a Ferrari.”
His move to McLaren, where he worked from 2008 until 2017, freed him to design a new look and design language from scratch: “That was as good as it gets for a designer,” he said.
Lilium is developing a five-seat flying electric vehicle for commuters after tests in 2017 of a two-seat jet capable of a mid-air transition from hover mode, like drones, into wing-borne flight, like conventional aircraft.
Combining these two features is what separates Lilium from rival start-ups working on so-called flying cars or taxis that rely on drone or helicopter-like technologies, such as German rival Volocopter or European aerospace giant Airbus.
“If the competitors come out there with their hovercraft or drones or whatever type of vehicles, they’ll have their own distinctive look,” Stephenson said.
“Let the other guys do whatever they want. The last thing I want to do is anything that has been done before.”
The jet, with power consumption per kilometer comparable to an electric car, could offer passenger flights at prices taxis now charge but at speeds five times faster, Lilium has said.
Nonetheless, flying cars face many hurdles, including convincing regulators and the public that their products can be used safely. Governments are still grappling with regulations for drones and driverless cars.
Lilium has raised more than $101 million in early-stage funding from backers including an arm of China’s Tencent, as well as Atomico and Obvious Ventures, venture firms started by co-founders of Skype and Twitter, respectively.
Facebook Rules at a Glance: What’s Banned, Exactly?
Facebook has revealed for the first time just what, exactly, is banned on its service in a new Community Standards document released on Tuesday. It’s an updated version of the internal rules the company has used to determine what’s allowed and what isn’t, down to granular details such as what, exactly, counts as a “credible threat” of violence. The previous public-facing version gave a broad-strokes outline of the rules, but the specifics were shrouded in secrecy for most of Facebook’s 2.2 billion users.
Not anymore. Here are just some examples of what the rules ban. Note: Facebook has not changed the actual rules – it has just made them public.
Credible violence
Is there a real-world threat? Facebook looks for “credible statements of intent to commit violence against any person, groups of people, or place (city or smaller).” Is there a bounty or demand for payment? The mention or an image of a specific weapon? A target and at least two details such as location, method or timing? A statement of intent to commit violence against a vulnerable person or group such as “heads-of-state, witnesses and confidential informants, activists, and journalists”?
Also banned: instructions “on how to make or use weapons if the goal is to injure or kill people,” unless there is “clear context that the content is for an alternative purpose (for example, shared as part of recreational self-defense activities, training by a country’s military, commercial video games, or news coverage).”
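Read as a checklist, those published signals can be sketched in code. The encoding below is purely illustrative: the field names and the way the signals combine are assumptions made for the example, not Facebook’s enforcement logic, which pairs automated systems with human reviewers.

```python
# Purely illustrative encoding of the published signals. Field names and the
# way signals combine are assumptions for the example, not Facebook's
# enforcement logic, which pairs automated systems with human review.

def looks_like_credible_threat(post: dict) -> bool:
    has_target = post.get("names_target", False)
    details = sum(post.get(key, False) for key in
                  ("mentions_location", "mentions_method", "mentions_timing"))
    offers_bounty = post.get("offers_bounty", False)
    names_weapon = post.get("mentions_specific_weapon", False)
    targets_vulnerable = post.get("targets_vulnerable_person_or_group", False)
    return (offers_bounty
            or names_weapon
            or targets_vulnerable
            or (has_target and details >= 2))

# A named target plus two details (location and timing) trips the checklist.
example = {"names_target": True, "mentions_location": True, "mentions_timing": True}
print(looks_like_credible_threat(example))  # True
```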
Hate speech
“We define hate speech as a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. We also provide some protections for immigration status,” Facebook says. As to what counts as a direct attack, the company says it’s any “violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.” There are three tiers of severity, ranging from comparing a protected group to filth or disease to calls to “exclude or segregate” a person or group based on the protected characteristics. Facebook notes that it does “allow criticism of immigration policies and arguments for restricting those policies.”
Graphic violence
Images of violence against “real people or animals” with comments or captions that convey enjoyment of suffering or humiliation, speak positively of the violence, or indicate the poster is “sharing footage for sensational viewing pleasure” are prohibited. Captions and context matter here because Facebook does allow such images in some cases, such as when they are condemned, shared as news or posted in a medical setting. Even then, the posts must be limited so that only adults can see them, and Facebook adds a warning screen.
Child sexual exploitation
“We do not allow content that sexually exploits or endangers children. When we become aware of apparent child exploitation, we report it to the National Center for Missing and Exploited Children (NCMEC), in compliance with applicable law. We know that sometimes people share nude images of their own children with good intentions; however, we generally remove these images because of the potential for abuse by others and to help avoid the possibility of other people reusing or misappropriating the images,” Facebook says. Then, it lists at least 12 specific instances of children in a sexual context, saying the ban includes, but is not limited to these examples. This includes “uncovered female nipples for children older than toddler-age.”
Adult nudity and sexual activity
“We understand that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause, or for educational or medical reasons. Where such intent is clear, we make allowances for the content. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding, and photos of post-mastectomy scarring,” Facebook says. That said, the company says it “defaults” to removing sexual imagery to prevent the sharing of non-consensual or underage content. The restrictions apply to images of real people as well as digitally created content, although art – such as drawings, paintings or sculptures – is an exception.
Cambridge Analytica Fights Back on Data Scandal
Cambridge Analytica unleashed its counterattack against claims that it misused data from millions of Facebook accounts, saying Tuesday it is the victim of misunderstandings and inaccurate reporting that portrays the company as the evil villain in a James Bond movie.
Clarence Mitchell, a high-profile publicist recently hired to represent the company, held Cambridge Analytica’s first news conference since allegations surfaced that the Facebook data helped Donald Trump win the 2016 presidential election. Christopher Wylie, a former employee of Cambridge Analytica’s parent, also claims that the company has links to the successful campaign to take Britain out of the European Union.
“The company has been portrayed in some quarters as almost some Bond villain,” Mitchell said. “Cambridge Analytica is no Bond villain.”
Cambridge Analytica didn’t use any of the Facebook data in the work it did for Trump’s campaign and it never did any work on the Brexit campaign, Mitchell said. Furthermore, he said, the data was collected by another company that was contractually obligated to follow data protection rules and the information was deleted as soon as Facebook raised concerns.
Mitchell insisted the company has not broken any laws and acknowledged that it has commissioned an independent investigation into the matter. He said the company had been victimized by “wild speculation based on misinformation, misunderstanding, or in some cases, frankly, an overtly political position.”
The comments come weeks after revelations that Cambridge Analytica misused personal information from as many as 87 million Facebook accounts engulfed both the consultancy and Facebook. Facebook CEO Mark Zuckerberg testified before U.S. congressional committees, and at one point the scandal wiped some $50 billion off the company’s market value.
Details on the scandal continued to trickle out. On Tuesday, a Cambridge University academic said the suspended CEO of Cambridge Analytica lied to British lawmakers investigating fake news.
Academic Aleksandr Kogan’s company, Global Science Research, developed a Facebook app that vacuumed up data from people who signed up to use the app as well as information from their Facebook friends, even if those friends hadn’t agreed to share their data.
Cambridge Analytica allegedly used the data to profile U.S. voters and target them with ads during the 2016 election to help elect Donald Trump. It denies the charge.
Kogan appeared before the House of Commons’ media committee Tuesday and was asked whether Cambridge Analytica’s suspended CEO, Alexander Nix, told the truth when he testified that none of the company’s data came from Global Science Research.
“That’s a fabrication,” Kogan told committee Chairman Damian Collins. Nix could not immediately be reached for comment.
Kogan also cast doubt on many of Wylie’s allegations, which have triggered a global debate about internet privacy protections. Wylie repeated his claims in a series of media interviews as well as an appearance before the committee.
Wylie worked for SCL Group Ltd. in 2013 and 2014.
“Mr. Wylie has invented many things,” Kogan said, calling him “duplicitous.”
No matter what, though, Kogan insisted in his testimony that the data would not be that useful to election consultants. The idea was seized upon by Mitchell, who also denied that the company had worked on the effort to have Britain leave the EU.
Mitchell said that the idea that political consultancies can use data alone to sway votes is “frankly insulting to the electorates. Data science in modern campaigning helps those campaigns, but it is still and always will be the candidates who win the races.”
…
China Tech Firms Pledge to End Sexist Job Ads
Chinese tech firms pledged on Monday to tackle gender bias in recruitment after a rights group said they routinely favored male candidates, luring applicants with the promise of working with “beautiful girls” in job advertisements.
A Human Rights Watch (HRW) report found that major technology companies including Alibaba, Baidu and Tencent had widely used “gender discriminatory job advertisements,” which stated a preference for men or specifically barred women from applying.
Some ads promised candidates they would work with “beautiful girls” and “goddesses,” HRW said in a report based on an analysis of 36,000 job posts between 2013 and 2018.
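The article does not describe how HRW screened the 36,000 job posts, but the kind of phrase matching such an analysis might start from can be sketched briefly. The phrase list and the sample ads below are hypothetical and included only to illustrate the idea, not HRW’s actual methodology.

```python
# Hypothetical sketch of phrase-based screening of job ads; the phrase list
# and ads are invented and do not reflect HRW's actual methodology.

DISCRIMINATORY_PHRASES = ["men only", "men preferred", "beautiful girls", "goddesses"]

def flag_ad(ad_text: str) -> list:
    """Return the flagged phrases found in one ad (case-insensitive)."""
    text = ad_text.lower()
    return [phrase for phrase in DISCRIMINATORY_PHRASES if phrase in text]

ads = [
    "Backend engineer, men preferred, competitive salary",    # invented example
    "Join a team of goddesses building our shopping app",     # invented example
    "Data analyst: all qualified candidates are welcome",
]
flagged = {ad: hits for ad in ads if (hits := flag_ad(ad))}
print(f"{len(flagged)} of {len(ads)} ads flagged:", flagged)
```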
Tencent, which runs China’s most popular messenger app WeChat, apologized for the ads after the HRW report was published on Monday.
“We are sorry they occurred and we will take swift action to ensure they do not happen again,” a Tencent spokesman told the Thomson Reuters Foundation.
E-commerce giant Alibaba, founded by billionaire Jack Ma, vowed to conduct stricter reviews to ensure its job ads followed workplace equality principles, but refused to say whether the ads singled out in the report were still being used.
“Our track record of not just hiring but promoting women in leadership positions speaks for itself,” said a spokeswoman.
Baidu, the Chinese equivalent of search engine Google, meanwhile said the postings were “isolated instances.”
HRW urged Chinese authorities to take action to end discriminatory hiring practices.
Its report also found nearly one in five ads for Chinese government jobs this year were “men only” or “men preferred.”
“Sexist job ads pander to the antiquated stereotypes that persist within Chinese companies,” HRW China director Sophie Richardson said in a statement.
“These companies pride themselves on being forces of modernity and progress, yet they fall back on such recruitment strategies, which shows how deeply entrenched discrimination against women remains in China,” she added.
China ranked 100th out of 144 countries in the World Economic Forum’s 2017 Gender Gap Report, which said the country’s progress toward gender parity had slowed.
…
Facebook Says It is Taking Down More Material About ISIS, al-Qaida
Facebook said on Monday that it removed or put a warning label on 1.9 million pieces of extremist content related to ISIS or al-Qaida in the first three months of the year, or about double the amount from the previous quarter.
Facebook, the world’s largest social media network, also published its internal definition of “terrorism” for the first time, as part of an effort to be more open about internal company operations.
The European Union has been putting pressure on Facebook and its tech industry competitors to remove extremist content more rapidly or face legislation forcing them to do so, and the sector has increased efforts to demonstrate progress.
Of the 1.9 million pieces of extremist content, the “vast majority” was removed and a small portion received a warning label because it was shared for informational or counter-extremist purposes, Facebook said in a post on a corporate blog.
Facebook uses automated software such as image matching to detect some extremist material. The median time required for takedowns was less than one minute in the first quarter of the year, the company said.
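Facebook has not disclosed its exact detection algorithms, but a common form of image matching is perceptual hashing, in which a new upload is compared against hashes of previously removed images. The sketch below shows the general idea with a simple average hash; the file names are hypothetical and the threshold is an arbitrary choice:

# A minimal sketch of perceptual "image matching" against a set of
# known extremist images. Average hashing is one common approach;
# Facebook has not disclosed its exact algorithms, and the file names
# below are hypothetical.

from PIL import Image  # pip install Pillow

def average_hash(path, size=8):
    """Downscale to size x size grayscale and hash each pixel against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# Hashes of previously removed images (hypothetical paths).
known_bad = {average_hash(p) for p in ["banned_1.jpg", "banned_2.jpg"]}

def looks_like_known_extremist_image(path, threshold=5):
    h = average_hash(path)
    return any(hamming(h, bad) <= threshold for bad in known_bad)

print(looks_like_known_extremist_image("new_upload.jpg"))

Because near-duplicate images produce nearby hashes, re-uploads of already-known material can be flagged almost instantly, which helps explain sub-minute median takedown times.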
Facebook, which bans terrorists from its network, has not previously said what its definition encompasses.
The company said it defines terrorism as: “Any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim.”
The definition is “agnostic to ideology,” the company said, including such varied groups as religious extremists, white supremacists and militant environmentalists.
…
Technology is Latest Trend Reshaping Fashion
Imagine wearing a computer in the form of a jacket. Now, it is possible.
“When somebody calls you, your jacket vibrates and gives you lights and [you] know somebody is calling you,” said Ivan Poupyrev, who manages Google’s Project Jacquard, a digital platform for smart clothing.
Project Jacquard partnered with Levi’s to create the first Jacquard-enabled garment, the Levi’s Commuter Trucker Jacket. What makes the jacket “smart” is washable conductive technology, created by Google and woven into the cuff.
“These are highly conductive fibers, which are very strong and can be used in standard denim-weaving process,” said Poupyrev.
A tap on the cuff can also provide navigation and play music when paired with a mobile phone, headphones and a small piece of removable hardware, called a snap tag, that attaches to the cuff.
“You get the most important features of the phone without taking your eyes off the road,” said Paul Dillinger, vice president of global product innovation for Levi Strauss & Co.
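Google has not published the snap tag’s internals here, but conceptually the pairing works like a small event dispatcher: the tag reports a gesture over a wireless link and the phone maps it to an action. The sketch below is purely illustrative; the gesture names and functions are hypothetical and are not the Jacquard SDK:

# A toy sketch of how cuff gestures might be mapped to phone actions.
# Gesture names and the phone-side functions are hypothetical.

def announce_caller(name):
    print(f"Incoming call from {name} -- jacket cuff vibrates and lights up")

def next_track():
    print("Skipping to next song")

def speak_directions():
    print("Reading next navigation step aloud")

# Map gestures reported by the snap tag to phone-side actions.
GESTURE_ACTIONS = {
    "double_tap": next_track,
    "brush_in": speak_directions,
}

def on_gesture(gesture):
    action = GESTURE_ACTIONS.get(gesture)
    if action:
        action()

on_gesture("double_tap")   # -> Skipping to next song
announce_caller("Alex")    # a phone event pushed out to the jacket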
Smart clothing
The Levi’s jacket is just one step to smarter clothing.
“Do they want to make shoes? Do they want to make bags? Do they want to make trousers?” Poupyrev explained, “The platform [is] being designed so that this technology can be applied to any type of garment. Right now, it’s Levi’s but right now, we’re very actively working with other partners in the apparel industry and try to help to make their products connected.”
That means designers need to be increasingly tech savvy.
“Fashion designers in the future are going to have to think about their craft differently. So, it’s not just sketching and pattern making and draping and drafting. It’s going to involve use case development and being a participant in cladding an app and becoming an industrial designer and figuring out what you want these components to look like.” Dillinger added, “What we found out is engineers and designers are kind of the same thing. They just use very different languages.”
New patterns and materials
From the functionality of clothes to how they are made, computing power is reshaping fashion. Designers can now create structures and patterns that were impossible before today’s technology.
“Designers now have a new set of tools to actually design things they could never design before. We can use computational tools to make patterns and formats that we could not do individually, because they were too mathematically and technically complicated. So, we’re using algorithms to help us facilitate design,” said Syuzi Pakhchyan, whose job is to envision the future as experience design lead at the innovation firm BCG Digital Ventures.
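To make that concrete, here is a toy generative-design sketch in the spirit Pakhchyan describes, written in Python rather than any particular design tool: it computes a wave-modulated dot grid and writes it out as an SVG. Every parameter is an arbitrary choice for illustration, not a real studio workflow:

# Illustrative only: a tiny generative-design sketch. It writes a
# wave-modulated dot grid as an SVG file; all parameters are arbitrary.

import math

def dot_grid_svg(path, cols=40, rows=20, spacing=20, amplitude=6, frequency=0.35):
    circles = []
    for row in range(rows):
        for col in range(cols):
            # Shift each row sideways along a sine wave and vary the dot size.
            x = col * spacing + amplitude * math.sin(frequency * row)
            y = row * spacing
            r = 2 + 2 * abs(math.sin(frequency * col))
            circles.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="{r:.1f}" />')
    width, height = cols * spacing, rows * spacing
    svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
           + "".join(circles) + "</svg>")
    with open(path, "w") as f:
        f.write(svg)

dot_grid_svg("pattern.svg")

Changing a single parameter such as the frequency regenerates the entire pattern, which is the kind of leverage algorithmic tools give designers.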
New technologies are also being used to create bioengineered fabrics grown from yeast cells in a lab. The company Bolt Threads is developing fabrics made of spider silk.
“We take the DNA out of spiders, put it in yeast, grow it in a big tank like brewing beer or wine and then purify the material, the polymer and spin it into fibers so it’s a very deep technology that’s required many years to develop,” said Dan Widmaier, chief executive officer and co-founder of Bolt Threads.
The company Modern Meadow grows leather from yeast cells.
“We engineer them to produce collagen which is the same natural protein that you find in your skin or an animal skin, and then we really grow billions of those cells, make a lot of collagen, purify it and then assemble it into whatever kinds of materials, the brands, the designers that we’re working with would like to see,” explained Suzanne Lee, chief creative officer of Modern Meadow.
She said these bioengineered materials are more sustainable and can be described as both natural and man-made.
“So, we’re really bringing both of those fields together to create a new material revolution. The best of nature with the best of design and engineering,” said Lee.
What’s hot and what’s not
Technology is also disrupting fashion trends. The prevalence of social media means it is no longer just designers who decide the latest styles.
“Fashion has been democratized. A lot of fashion is being made by influencers with zero design experience,” said Pakhchyan.
Replacing trend forecasters, artificial intelligence can now collect data from social media and the web to give designers insight on public preferences.
“This is actually I think changing the role of the designer. Cause now, you have all this information so what are you going to do with this information?” said Pakhchyan.
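A heavily simplified stand-in for such trend-spotting systems might just count hashtags across a feed of posts and surface the most frequent ones; real systems ingest far larger streams and layer image recognition and forecasting on top. The posts below are invented:

# A toy stand-in for AI-driven trend spotting: count hashtags across a
# batch of social posts and surface the most frequent. Posts are made up.

import re
from collections import Counter

posts = [
    "Obsessed with this #neon #streetwear drop",
    "Thrifted the perfect denim jacket #vintage #denim",
    "Runway recap: so much #neon this season",
    "Everyday fit: #denim on #denim",
]

hashtags = Counter()
for post in posts:
    hashtags.update(tag.lower() for tag in re.findall(r"#(\w+)", post))

print("Top tags:", hashtags.most_common(3))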
Shopping online
How clothes are marketed and sold is also increasingly dependent on technology. If a consumer has shopped on a website once, that data is collected and used to entice the shopper to buy other products through personalization.
“When I connect online with a brand, they know me. I feel like they know me. They know who I am, they know what I like, they know what I want,” said Pakhchyan.
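One minimal way to picture that personalization (the catalog, tags, and purchase history below are invented for illustration) is to recommend products that share attributes with items the shopper has already bought:

# A minimal sketch of purchase-history personalization: recommend
# products that share tags with what a shopper already bought.
# Catalog, tags, and history are all invented.

CATALOG = {
    "commuter_jacket": {"outerwear", "tech", "denim"},
    "classic_trucker": {"outerwear", "denim"},
    "smart_backpack":  {"tech", "accessory"},
    "wool_scarf":      {"accessory", "winter"},
}

def recommend(purchased, top_n=2):
    owned_tags = set().union(*(CATALOG[item] for item in purchased))
    scores = {
        item: len(tags & owned_tags)
        for item, tags in CATALOG.items()
        if item not in purchased
    }
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [item for item, score in ranked[:top_n] if score > 0]

print(recommend(["commuter_jacket"]))  # e.g. ['classic_trucker', 'smart_backpack']

Production recommender systems use far richer signals, but the principle is the same: past behavior shapes what the shopper is shown next.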
The Levi’s smart jacket can also be purchased online. The price tag: $350.
…