All posts by MTechnology

Flying Taxi Start-Up Hires Designer Behind Modern Mini, Fiat 500

Lilium, a German start-up with Silicon Valley-scale ambitions to put electric “flying taxis” in the air next decade, has hired Frank Stephenson, the designer behind iconic car brands including the modern Mini, Fiat 500 and McLaren P1.

Lilium is developing a lightweight aircraft powered by 36 electric jet engines mounted on its wings. It aims to travel at speeds of up to 300 kilometers (186 miles) per hour, with a range of 300 km on a single charge, the firm has said.

Founded in 2015 by four Munich Technical University students, the Bavarian firm has set out plans to demonstrate a fully functional vertical take-off electric jet by next year and aims to begin online booking of commuter flights by 2025.

It is one of a number of companies, from Chinese automaker Geely to U.S. ride-sharing firm Uber, looking to tap advances in drone technology, high-performance materials and automated driving to turn aerial driving – long a staple of science fiction movies like “Blade Runner” – into reality.

Stephenson, 58, who holds American and British citizenship, will join the aviation start-up in May. He lives west of London and will commute weekly to Lilium’s offices outside of Munich.

His job is to design a plane on the outside and a car inside.

Famous for a string of hits at BMW, Mini, Ferrari, Maserati, Fiat, Alfa Romeo and McLaren, Stephenson will lead all aspects of Lilium design, including the interior and exterior of its jets, the service’s landing pads and even its departure lounges.

“With Lilium, we don’t have to base the jet on anything that has been done before,” Stephenson told Reuters in an interview.

“What’s so incredibly exciting about this is we’re not talking about modifying a car to take to the skies, and we are not talking about modifying a helicopter to work in a better way.”

Stephenson recalled working at Ferrari a dozen years ago and thinking it was the greatest job a grown-up kid could ever want.

But the limits of working at such a storied carmaker dawned on him: “I always had to make a car that looked like a Ferrari.”

His move to McLaren, where he worked from 2008 until 2017, freed him to design a new look and design language from scratch: “That was as good as it gets for a designer,” he said.

Lilium is developing a five-seat flying electric vehicle for commuters after tests in 2017 of a two-seat jet capable of a mid-air transition from hover mode, like drones, into wing-borne flight, like conventional aircraft.

Combining these two features is what separates Lilium from rival start-ups working on so-called flying cars or taxis that rely on drone or helicopter-like technologies, such as German rival Volocopter or European aerospace giant Airbus.

“If the competitors come out there with their hovercraft or drones or whatever type of vehicles, they’ll have their own distinctive look,” Stephenson said.

“Let the other guys do whatever they want. The last thing I want to do is anything that has been done before.”

The jet, with power consumption per kilometer comparable to an electric car, could offer passenger flights at prices taxis now charge but at speeds five times faster, Lilium has said.
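The "five times faster" claim reduces to simple arithmetic. The sketch below uses Lilium's stated 300 km/h jet speed; the 60 km/h taxi average and the 60 km example route are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope trip-time comparison for an example commute.
# Jet speed (300 km/h) comes from Lilium's stated figures;
# the taxi speed and distance below are illustrative assumptions.

def trip_minutes(distance_km: float, speed_kmh: float) -> float:
    """Travel time in minutes at a constant speed."""
    return distance_km / speed_kmh * 60

JET_SPEED_KMH = 300   # Lilium's claimed top speed
TAXI_SPEED_KMH = 60   # assumed average road speed
DISTANCE_KM = 60      # assumed example route

jet = trip_minutes(DISTANCE_KM, JET_SPEED_KMH)
taxi = trip_minutes(DISTANCE_KM, TAXI_SPEED_KMH)

print(f"jet: {jet:.0f} min, taxi: {taxi:.0f} min, ratio: {taxi / jet:.1f}x")
```

At these assumed figures the ratio works out to exactly the factor of five Lilium cites; any taxi average below 60 km/h would make the gap wider.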

Nonetheless, flying cars face many hurdles, including convincing regulators and the public that their products can be used safely. Governments are still grappling with regulations for drones and driverless cars.

Lilium has raised more than $101 million in early-stage funding from backers including an arm of China’s Tencent, as well as Atomico and Obvious Ventures, the venture firms of the co-founders of Skype and Twitter, respectively.

 


Facebook Rules at a Glance: What’s Banned, Exactly?

Facebook has revealed for the first time just what, exactly, is banned on its service in a new Community Standards document released on Tuesday. It’s an updated version of the internal rules the company has used to determine what’s allowed and what isn’t, down to granular details such as what, exactly, counts as a “credible threat” of violence. The previous public-facing version gave a broad-strokes outline of the rules, but the specifics were shrouded in secrecy for most of Facebook’s 2.2 billion users.

Not anymore. Here are just some examples of what the rules ban. Note: Facebook has not changed the actual rules – it has just made them public.

Credible violence

Is there a real-world threat? Facebook looks for “credible statements of intent to commit violence against any person, groups of people, or place (city or smaller).” Is there a bounty or demand for payment? The mention or an image of a specific weapon? A target and at least two details such as location, method or timing? A statement of intent to commit violence against a vulnerable person or group such as “heads-of-state, witnesses and confidential informants, activists, and journalists”?

Also banned: instructions “on how to make or use weapons if the goal is to injure or kill people,” unless there is “clear context that the content is for an alternative purpose (for example, shared as part of recreational self-defense activities, training by a country’s military, commercial video games, or news coverage).”

Hate speech

“We define hate speech as a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. We also provide some protections for immigration status,” Facebook says. As to what counts as a direct attack, the company says it’s any “violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.” There are three tiers of severity, ranging from comparing a protected group to filth or disease to calls to “exclude or segregate” a person or group based on the protected characteristics. Facebook notes that it does “allow criticism of immigration policies and arguments for restricting those policies.”

Graphic violence

Images of violence against “real people or animals” are prohibited when accompanied by comments or captions that express enjoyment of suffering or humiliation, speak positively of the violence, or indicate “the poster is sharing footage for sensational viewing pleasure.” Captions and context matter here because Facebook does allow such images in some cases, such as when they are condemned, shared as news, or shown in a medical setting. Even then, the post must be limited so only adults can see it, and Facebook adds a warning screen.

Child sexual exploitation

“We do not allow content that sexually exploits or endangers children. When we become aware of apparent child exploitation, we report it to the National Center for Missing and Exploited Children (NCMEC), in compliance with applicable law. We know that sometimes people share nude images of their own children with good intentions; however, we generally remove these images because of the potential for abuse by others and to help avoid the possibility of other people reusing or misappropriating the images,” Facebook says. Then, it lists at least 12 specific instances of children in a sexual context, saying the ban includes, but is not limited to these examples. This includes “uncovered female nipples for children older than toddler-age.”

Adult nudity and sexual activity

“We understand that nudity can be shared for a variety of reasons, including as a form of protest, to raise awareness about a cause, or for educational or medical reasons. Where such intent is clear, we make allowances for the content. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding, and photos of post-mastectomy scarring,” Facebook says. That said, the company says it “defaults” to removing sexual imagery to prevent the sharing of non-consensual or underage content. The restrictions apply to images of real people as well as digitally created content, although art – such as drawings, paintings or sculptures – is an exception.

 


Cambridge Analytica Fights Back on Data Scandal

Cambridge Analytica unleashed its counterattack against claims that it misused data from millions of Facebook accounts, saying Tuesday it is the victim of misunderstandings and inaccurate reporting that portrays the company as the evil villain in a James Bond movie.

Clarence Mitchell, a high-profile publicist recently hired to represent the company, held Cambridge Analytica’s first news conference since allegations surfaced that the Facebook data helped Donald Trump win the 2016 presidential election. Christopher Wylie, a former employee of Cambridge Analytica’s parent, also claims that the company has links to the successful campaign to take Britain out of the European Union.

“The company has been portrayed in some quarters as almost some Bond villain,” Mitchell said. “Cambridge Analytica is no Bond villain.”

Cambridge Analytica didn’t use any of the Facebook data in the work it did for Trump’s campaign and it never did any work on the Brexit campaign, Mitchell said. Furthermore, he said, the data was collected by another company that was contractually obligated to follow data protection rules and the information was deleted as soon as Facebook raised concerns.

Mitchell insisted the company has not broken any laws, but acknowledged it had commissioned an independent investigation. He insisted that the company had been victimized by “wild speculation based on misinformation, misunderstanding, or in some cases, frankly, an overtly political position.”

The comments come weeks after the scandal engulfed both the consultancy and Facebook, which has been embroiled in controversy since revelations that Cambridge Analytica misused personal information from as many as 87 million Facebook accounts. Facebook CEO Mark Zuckerberg testified before U.S. congressional committees, and at one point the scandal wiped some $50 billion off the company’s market value.

Details on the scandal continued to trickle out. On Tuesday, a Cambridge University academic said the suspended CEO of Cambridge Analytica lied to British lawmakers investigating fake news.

Academic Aleksandr Kogan’s company, Global Science Research, developed a Facebook app that vacuumed up data from people who signed up to use the app as well as information from their Facebook friends, even if those friends hadn’t agreed to share their data.

Cambridge Analytica allegedly used the data to profile U.S. voters and target them with ads during the 2016 election to help elect Donald Trump. It denies the charge.

Kogan appeared before the House of Commons’ media committee Tuesday and was asked whether Cambridge Analytica’s suspended CEO, Alexander Nix, told the truth when he testified that none of the company’s data came from Global Science Research.

“That’s a fabrication,” Kogan told committee Chairman Damian Collins. Nix could not immediately be reached for comment.

Kogan also cast doubt on many of Wylie’s allegations, which have triggered a global debate about internet privacy protections. Wylie repeated his claims in a series of media interviews as well as an appearance before the committee.

Wylie worked for SCL Group Ltd. in 2013 and 2014.

“Mr. Wylie has invented many things,” Kogan said, calling him “duplicitous.”

No matter what, though, Kogan insisted in his testimony that the data would not be that useful to election consultants. The idea was seized upon by Mitchell, who also denied that the company had worked on the effort to have Britain leave the EU.

Mitchell said that the idea that political consultancies can use data alone to sway votes is “frankly insulting to the electorates. Data science in modern campaigning helps those campaigns, but it is still and always will be the candidates who win the races.”


China Tech Firms Pledge to End Sexist Job Ads

Chinese tech firms pledged on Monday to tackle gender bias in recruitment after a rights group said they routinely favored male candidates, luring applicants with the promise of working with “beautiful girls” in job advertisements.

A Human Rights Watch (HRW) report found that major technology companies including Alibaba, Baidu and Tencent had widely used “gender discriminatory job advertisements,” which said men were preferred or specifically barred women applicants.

Some ads promised candidates they would work with “beautiful girls” and “goddesses,” HRW said in a report based on an analysis of 36,000 job posts between 2013 and 2018.

Tencent, which runs China’s most popular messenger app WeChat, apologized for the ads after the HRW report was published on Monday.

“We are sorry they occurred and we will take swift action to ensure they do not happen again,” a Tencent spokesman told the Thomson Reuters Foundation.

E-commerce giant Alibaba, founded by billionaire Jack Ma, vowed to conduct stricter reviews to ensure its job ads followed workplace equality principles, but refused to say whether the ads singled out in the report were still being used.

“Our track record of not just hiring but promoting women in leadership positions speaks for itself,” said a spokeswoman.

Baidu, the Chinese equivalent of search engine Google, meanwhile said the postings were “isolated instances.”

HRW urged Chinese authorities to take action to end discriminatory hiring practices.

Its report also found nearly one in five ads for Chinese government jobs this year were “men only” or “men preferred.”

“Sexist job ads pander to the antiquated stereotypes that persist within Chinese companies,” HRW China director Sophie Richardson said in a statement.

“These companies pride themselves on being forces of modernity and progress, yet they fall back on such recruitment strategies, which shows how deeply entrenched discrimination against women remains in China,” she added.

China was ranked 100 out of 144 countries in the World Economic Forum’s 2017 Gender Gap Report, which said the country’s progress toward gender parity had slowed.


Facebook Says It is Taking Down More Material About ISIS, al-Qaida

Facebook said on Monday that it removed or put a warning label on 1.9 million pieces of extremist content related to ISIS or al-Qaida in the first three months of the year, or about double the amount from the previous quarter.

Facebook, the world’s largest social media network, also published its internal definition of “terrorism” for the first time, as part of an effort to be more open about internal company operations.

The European Union has been putting pressure on Facebook and its tech industry competitors to remove extremist content more rapidly or face legislation forcing them to do so, and the sector has increased efforts to demonstrate progress.

Of the 1.9 million pieces of extremist content, the “vast majority” was removed and a small portion received a warning label because it was shared for informational or counter-extremist purposes, Facebook said in a post on a corporate blog.

Facebook uses automated software such as image matching to detect some extremist material. The median time required for takedowns was less than one minute in the first quarter of the year, the company said.
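Facebook's production systems are not public, but the "image matching" it describes can be illustrated in miniature: compare a fingerprint of each upload against a database of fingerprints of previously removed content. The sketch below is a hedged toy using an exact cryptographic hash; real systems rely on perceptual hashes that survive re-encoding and cropping, and every name and byte string here is invented.

```python
# Toy exact-match lookup against a set of known content hashes.
# SHA-256 only catches byte-identical copies; production image
# matching uses perceptual hashing, which this does not model.
import hashlib

def fingerprint(data: bytes) -> str:
    """Hex digest used as the content's fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints of previously removed content.
known_hashes = {fingerprint(b"previously-removed-image-bytes")}

def is_known(data: bytes) -> bool:
    """True if this exact content was seen and flagged before."""
    return fingerprint(data) in known_hashes

print(is_known(b"previously-removed-image-bytes"))  # exact copy
print(is_known(b"a brand-new upload"))              # unseen content
```

Because a set lookup is constant-time, this style of matching is what makes sub-minute median takedowns plausible: the expensive human review happens once, and every later re-upload of the same bytes is caught automatically.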

Facebook, which bans terrorists from its network, has not previously said what its definition encompasses.

The company said it defines terrorism as: “Any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim.”

The definition is “agnostic to ideology,” the company said, including such varied groups as religious extremists, white supremacists and militant environmentalists.


Technology is Latest Trend Reshaping Fashion

Imagine wearing a computer in the form of a jacket. Now, it is possible.

“When somebody calls you, your jacket vibrates and gives you lights and [you] know somebody is calling you,” said Ivan Poupyrev, who manages Google’s Project Jacquard, a digital platform for smart clothing.

Project Jacquard formed a partnership with Levi’s to create the first Jacquard-enabled garment, the Levi’s Commuter Trucker Jacket. What makes the jacket “smart” is washable conductive technology, created by Google, woven into its cuff.

“These are highly conductive fibers, which are very strong and can be used in standard denim-weaving process,” said Poupyrev.

A tap on the cuff can also provide navigation and play music when paired with a mobile phone, headphones and a small piece of removable hardware, called a snap tag, that attaches to the cuff.

“You get the most important features of the phone without taking your eyes off the road,” said Paul Dillinger, vice president of global product innovation for Levi Strauss & Co.

Smart clothing

The Levi’s jacket is just one step to smarter clothing.

“Do they want to make shoes? Do they want to make bags? Do they want to make trousers?” Poupyrev explained, “The platform [is] being designed so that this technology can be applied to any type of garment. Right now, it’s Levi’s but right now, we’re very actively working with other partners in the apparel industry and try to help to make their products connected.”

That means designers need to be increasingly tech savvy.

“Fashion designers in the future are going to have to think about their craft differently. So, it’s not just sketching and pattern making and draping and drafting. It’s going to involve use case development and being a participant in cladding an app and becoming an industrial designer and figuring out what you want these components to look like.” Dillinger added, “What we found out is engineers and designers are kind of the same thing. They just use very different languages.”

New patterns and materials

From the functionality of clothes to how they are made, computing power is reshaping fashion. Designers can create structures and patterns that were impossible before current technology.

“Designers now have a new set of tools to actually design things they could never design before. We can use computational tools to make patterns and formats that we could not do individually, because they were too mathematically and technically complicated. So, we’re using algorithms to help us facilitate design,” said Syuzi Pakhchyan, whose job is to envision the future as experience design lead at the innovation firm BCG Digital Ventures.

New technologies are also being used to make bioengineered fabrics grown from yeast cells in a lab. One company, Bolt Threads, is developing fabrics made of spider silk.

“We take the DNA out of spiders, put it in yeast, grow it in a big tank like brewing beer or wine and then purify the material, the polymer, and spin it into fibers, so it’s a very deep technology that’s required many years to develop,” said Dan Widmaier, chief executive officer and co-founder of Bolt Threads.

The company Modern Meadow grows leather from yeast cells.

“We engineer them to produce collagen, which is the same natural protein that you find in your skin or an animal skin, and then we really grow billions of those cells, make a lot of collagen, purify it and then assemble it into whatever kinds of materials the brands, the designers that we’re working with would like to see,” explained Suzanne Lee, chief creative officer of Modern Meadow.

She said these bioengineered materials are more sustainable and can be described as both natural and man-made.

“So, we’re really bringing both of those fields together to create a new material revolution. The best of nature with the best of design and engineering,” said Lee.

What’s hot and what’s not

Technology is also disrupting fashion trends. The prevalence of social media means it is no longer just designers who decide the latest styles in fashion.

“Fashion has been democratized. A lot of fashion is being made by influencers with zero design experience,” said Pakhchyan.

Replacing trend forecasters, artificial intelligence can now collect data from social media and the web to give designers insight on public preferences.

“This is actually I think changing the role of the designer. Cause now, you have all this information so what are you going to do with this information?” said Pakhchyan.
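The trend mining Pakhchyan describes can be reduced to a toy example: count how often candidate styles appear across social posts and rank them. Everything below, from the post texts to the style keywords, is invented for illustration; a real system would crawl platforms at scale and use proper language models rather than substring matching.

```python
# Toy trend mining: rank candidate styles by how many posts
# mention them. Posts and keywords here are invented examples.
from collections import Counter

posts = [
    "loving this oversized denim jacket",
    "denim is back, pair it with neon sneakers",
    "neon everything this spring",
    "minimalist linen looks for summer",
]

styles = ["denim", "neon", "linen", "plaid"]

# Count one mention per (post, style) pair that matches.
counts = Counter(
    style
    for post in posts
    for style in styles
    if style in post.lower()
)

for style, n in counts.most_common():
    print(style, n)
```

Even this crude count surfaces the point in the article: the ranking emerges from what people actually post, not from a forecaster's judgment, and the designer's new problem is deciding what to do with that signal.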

Shopping online

How clothes are marketed and sold is also increasingly dependent on technology. If a consumer has shopped on a website once, that data is collected to entice the user to buy other products through personalization.

“When I connect online with a brand, they know me. I feel like they know me. They know who I am, they know what I like, they know what I want,” said Pakhchyan.

The Levi’s smart jacket can also be purchased online. The price tag: $350.
