
Tuesday, June 30, 2015

Rebooting the Automobile

As reported by MIT Technology Review: “Where would you like to go?” Siri asked.

It was a sunny, slightly dreamy morning in the heart of Silicon Valley, and I was sitting in the passenger seat of what seemed like a perfectly ordinary new car. There was something strangely Apple-like about it, though. There was no mistaking the apps arranged across the console screen, nor the deadpan voice of Apple’s virtual assistant, who, as backseat drivers go, was pretty helpful. Summoned via a button on the steering wheel and asked to find sushi nearby, Siri read off the names of a few restaurants in the area, waited for me to pick one, and then showed the way on a map that appeared on the screen.

The vehicle was, in fact, a Hyundai Sonata. The Apple-like interface was coming from an iPhone connected by a cable. Most carmakers have agreed to support software from Apple called CarPlay, as well as a competing product from Google, called Android Auto, in part to address a troubling trend: according to research from the National Safety Council, a nonprofit group, more than 25 percent of road accidents are a result of a driver’s fiddling with a phone. Hyundai’s car, which goes on sale this summer, will be one of the first to support CarPlay, and the carmaker had made the Sonata available so I could see how the software works.


CarPlay certainly seemed more intuitive and less distracting than fiddling with a smartphone behind the wheel. Siri felt like a better way to send texts, place calls, or find directions. The system has obvious limitations: if the phone loses its signal or its battery dies, for example, it stops working. And Siri can’t always be relied upon to hear you correctly. Still, I would’ve gladly used CarPlay in the rental car I’d picked up at the San Francisco airport: a 2013 Volkswagen Jetta. There was little inside besides an air-conditioning unit and a radio. The one technological luxury, ironically, was a 30-pin cable for an outdated iPhone. To use my smartphone for navigation, I needed a suction mount, an adapter for charging through the cigarette lighter, and good eyesight. More than once as I drove around, my iPhone came unstuck from the windshield and bounced under the passenger seat.

Android Auto also seemed like a huge improvement. When a Google product manager, Daniel Holle, took me for a ride in another Hyundai Sonata, he plugged his Nexus smartphone into the car and the touch screen was immediately taken over by Google Now, a kind of super-app that provides recommendations based on your location, your Web searches, your Gmail messages, and so on. In our case it was showing directions to a Starbucks because Holle had searched for coffee just before leaving his office. Had a ticket for an upcoming flight been in his in-box, Holle explained, Google Now would’ve automatically shown directions to the airport. “A big part of why we’re doing it is driver safety,” he said. “But there’s also this huge opportunity for digital experience in the car. This is a smart driving assistant.”

CarPlay and Android Auto not only give Apple and Google a foothold in the automobile but may signal the start of a more significant effort by these companies to reinvent the car. If they could tap into the many different computers that control car systems, they could use their software expertise to reimagine functions such as steering or collision avoidance. They could create operating systems for cars.

Google has already built its own self-driving cars, using a combination of advanced sensors, mapping data, and clever navigation and control software. There are indications that Apple is now working on a car too: though the company won’t comment on what it terms “rumors and speculation,” it is hiring dozens of people with expertise in automotive design, engineering, and strategy. Vans that belong to Apple, fitted with sensors that might be useful for automated driving, have been spotted cruising around California.

After talking to numerous people with knowledge of the car industry, I believe an Apple car is entirely plausible. But it almost doesn’t matter. The much bigger opportunity for Apple and Google will be in developing software that will add new capabilities to any car: not just automated driving but also advanced diagnostics and over-the-air software upgrades and repairs. Already, a button at the bottom of the Android Auto interface is meant for future apps that could show vehicle diagnostics. Google expects these apps to be made by carmakers at first, showing more advanced vehicle data than the mysterious engine light that flashes when something goes wrong. Google would like to make use of such car data too, Holle says. Perhaps if Android Auto knew that your engine was overheating, Google Now could plan a trip to a nearby mechanic for you.

At least for now, though, the Google and Apple services essentially can read only basic vehicle data like whether a car is in drive, park, or reverse. Carmakers won’t let those companies put their software deeper into the brains of the car, and whether that will change is a crucial question. After all, modern cars depend on computers to run just about everything, from the entertainment console to the engine pistons, and whoever supplies the software for these systems will shape automotive innovation. Instead of letting Apple and Google define their future, carmakers are opening or expanding labs in Silicon Valley in an attempt to fend off the competition and more fully embrace the possibilities offered by software.

The car could be on the verge of its biggest reinvention yet—but can carmakers do it themselves? Or will they give up the keys?

Cultural shift
Cars are far more computerized than they might seem. Automakers began using integrated circuits to monitor and control basic engine functions in the late 1970s; computerization accelerated in the 1980s as regulations on fuel efficiency and emissions were put in place, requiring even better engine control. In 1982, for instance, computers began taking full control of the automatic transmission in some models.

New cars now have between 50 and 100 computers and run millions of lines of code. An internal network connects these computers, allowing a mechanic or dealer to assess a car’s health through a diagnostic port just below the steering wheel. Some carmakers diagnose problems with vehicles remotely, through a wireless link, and it’s possible to plug a gadget into your car’s diagnostic port to identify engine problems or track driving habits via a smartphone app.
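Those plug-in gadgets generally speak the same standardized OBD-II protocol that mechanics use at that port. As a rough illustration (the article names no specific tools), here is a minimal sketch using the open-source python-obd library and an ELM327-style USB adapter, both assumptions on my part:

```python
# Minimal sketch: read live engine data and stored trouble codes through
# the OBD-II diagnostic port. Assumes an ELM327-compatible adapter and
# the python-obd library; dealer tools speak richer, proprietary protocols.
import obd

connection = obd.OBD()  # auto-detects the adapter and negotiates a protocol

for command in (obd.commands.RPM, obd.commands.SPEED, obd.commands.COOLANT_TEMP):
    response = connection.query(command)
    if not response.is_null():
        print(command.name, response.value)

# Diagnostic trouble codes -- the data behind the "check engine" light.
dtcs = connection.query(obd.commands.GET_DTC)
print(dtcs.value)  # e.g. [('P0301', 'Cylinder 1 Misfire Detected')]
```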


However, until now we haven’t seen software make significant use of all these computer systems. There is no common operating system. And because carmakers are preventing CarPlay and Android Auto from playing that role, the auto companies themselves are taking a first crack at it. How successful they are will depend on how ambitious and creative they are. Roughly 10 minutes north of Google’s office, I got to see how one of the oldest car companies is beginning to think about this possibility.

Ford opened its research lab in Palo Alto in January. Located one door down from Skype and just around the corner from Hewlett-Packard, it looks like a typical startup space. There are red beanbags, 3-D printers, and rows of empty desks, which the company hopes to fill with more than a hundred engineers. I met a user-interface designer named Casey Feldman. He was perched atop a balance board at a standing desk, working on Ford’s latest infotainment system, Sync 3. It runs software Ford has developed, but the automaker is working on ways to hand the screen over to CarPlay or Android Auto if you plug in a smartphone. Feldman was using a box about the size of a mini-fridge, with a touch screen and dashboard controls, to test the software. He showed how Sync 3 displays a simplified interface when the car is traveling at high speed.

Ford’s first touch-screen interface, called MyFord Touch, didn’t go well. Introduced in 2010, it was plagued by bugs, and customers complained that it was overcomplicated. When Ford dropped from 10th to 20th place in Consumer Reports’ annual reliability ratings in 2011, MyFord Touch was cited as a key problem. The company ended up sending out more than 250,000 memory sticks containing software fixes for customers to upload to their cars.

Besides running apps like Spotify and Pandora Radio, Sync 3 can connect to a home Wi-Fi network to receive bug fixes and updates for the console software. Ford clearly hopes that drivers will prefer its system to either CarPlay or Android Auto, and it’s doing its best to make it compelling. “It’s a cultural shift,” says Dragos Maciuca, the lab’s technical director. The lab wants to incorporate “some of the Silicon Valley attitudes, but also processes” into the automotive industry, he says. “That is clearly going to be very challenging, but that’s why we’re here. It doesn’t make sense that you buy a car, and the first thing you do is buy a $5 suction cup for your phone.”

Ford has been ahead of many automakers in its experimentation. It has come out with a module known as OpenXC, which lets people download a wide range of sensor data from their cars and develop apps to aid their driving. A Ford engineer used it to create a shift knob for cars with manual transmission so that the stick lights up or buzzes when it’s time to change gears. But OpenXC has not been widely adopted, and despite Ford’s best efforts, the company’s overall approach still seems somewhat conservative. Maciuca and others said they were wary of alienating Ford’s vast and diverse customer base.

In February, meanwhile, the chip maker Nvidia announced two new products designed to give cars considerably more computing power. One is capable of rendering 3-D graphics on up to three different in-car displays at once. The other can collect and process data from up to 12 cameras around a car, and it features machine-learning software that can help collision-avoidance systems or even automated driving systems recognize certain obstacles on the road. These two systems point to the huge opportunity that advanced automotive sensors and computer systems offer to software makers. “We’re arguing now you need supercomputing in the car,” Danny Shapiro, senior director of automotive at Nvidia, told me.

One of the cars at Stanford’s Dynamic Design Lab.
If anyone could find a great use for a supercomputer on wheels, it’s Chris Gerdes, a professor of mechanical engineering who leads Stanford University’s Dynamic Design Lab. Gerdes originally studied robotics as a graduate student, but while pursuing a PhD at Berkeley, he became interested in cars after rebuilding the engine of an old Chevy Cavalier. He drove me to the lab from his office in an incredibly messy Subaru Legacy.

Inside the lab, students were working away on several projects spread across large open spaces: a lightweight, solar-­powered car; a Ford Fusion covered in sensors; and a hand-built two-person vehicle resembling a dune buggy. Gerdes pointed to the Fusion. After Ford gave his students a custom software interface, they found it relatively easy to get the car to drive itself. Indeed, the ability to manipulate a car through software explains why many cars can already park themselves and automatically stay within a lane and maintain a safe distance from the vehicle ahead. In the coming years, several carmakers will introduce vehicles capable of driving themselves on highways for long periods. “There are so many things you can do now to innovate that don’t necessarily require that you bend sheet metal,” Gerdes said as we walked around. “The car is a platform for all sorts of things, and many of those things can be tried in software.”

The dune-buggy-like car takes programmability to the extreme. Virtually every component is controlled by an actuator connected to a computer. This means that software can configure each wheel to behave in a way that makes an ordinary road feel as if it were covered with ice. Or, using data from sensors fitted to the front of the car, it can be configured to help a novice motorist react like a race-car driver. The idea is to explore how computers could make driving safer and more efficient without taking control away from the driver completely.

In fact, one small carmaker—headquartered in Silicon Valley—shows how transformational an aggressive approach to software innovation could be.

Drive safely
Tesla Motors, based in Palo Alto, has built what’s probably the world’s most computerized consumer car. The Model S, an electric sedan released in 2012, has a 17-inch touch-screen display, a 3G cellular connection, and even a Web browser. The touch screen shows entertainment apps, a map with nearby charging stations, and details about the car’s battery. But it can also be used to customize all sorts of vehicle settings, including those governing the suspension and the acceleration mode (depending on the model, it goes from “normal” to “sport” or from “sport” to “insane”).

Every few months, Tesla owners receive a software update that adds new functions to their vehicle. Since the Model S was released, these have included more detailed maps, better acceleration, a hill-start mode that stops the car from rolling backwards, and a blind-spot warning (providing a car has the right sensors). Tesla’s CEO, Elon Musk, has said a software patch released this summer would add automated highway driving to suitably equipped models.

These software updates can do more than just add new bells and whistles. Toward the end of 2013, the company faced a safety scare when several Model S cars caught fire after running over debris that ruptured their battery packs. Tesla engineers believed the fires to be rare events, and they knew of a simple fix, but it meant raising the suspension on every Model S on the road. Instead of requiring owners to bring their cars to a mechanic, Tesla released a patch over the airwaves that adjusted the suspension to keep the Model S elevated at higher speeds, greatly reducing the chance of further accidents. (In case customers wanted even more peace of mind, the company also offered a titanium shield that mechanics could install.)

Tesla’s efforts show how making cars more fully programmable can add value well after they roll out of the showroom. But software-defined vehicles could also become a juicy target for troublemakers.


In 2013, at the DEF CON conference in Las Vegas, two computer-security experts, Charlie Miller and Chris Valasek, showed that they could hijack the internal network of a 2010 Toyota Prius and take control of critical features, including steering and braking. “No one really knows a lot about car security, or what it’s all about, because there hasn’t been a lot of research,” Miller told me. “It’s possible, if you went out and bought a 2013, they’ve done huge improvements—we don’t know. That’s one of the scary things about it.”

A few real-world incidents point to why car security might become a problem. In February 2010, dozens of cars around Texas suddenly refused to start and also, inexplicably, began sounding their horns. The cars had been fitted with devices that let the company that leased them, the Texas Auto Center, track them and then disable and recover them should the driver fail to make payments. Unfortunately, a disgruntled ex-employee with access to the company’s system was using those gadgets to cause havoc.

I asked Gerdes whether concerns over reliability and security could slow the computerization of cars. He said that didn’t have to be the case. “The key question is, ‘How fast can you move safely?’” he says. “The bet that many Silicon Valley companies are making—and that many car companies are making with their Valley offices—is that there are ways to move faster and still be safe.”

Ultimately, the opportunities may well outweigh such concerns. Tesla’s efforts point to how significant software innovation could turn out to be for carmakers. Tesla is even experimenting with connecting the forthcoming autopilot system to the car’s calendar, for example. The car could automatically pull up outside the front door just in time for the owner to drive to an upcoming appointment.

Perhaps this also explains why Apple and Google are now dabbling in vehicle hardware: so they can fully own some people’s driving time even before carmakers decide to open up more aspects of their vehicles. “Clearly Apple and Google would love to be the ones who have the operating system for these future cars,” Gerdes says.

As I drove back to the San Francisco airport, my VW Jetta felt more low-tech than ever. The ride was fairly peaceful, with the Santa Cruz Mountains looming in the distance. Even so, after so much driving, I would’ve been glad had Siri offered to take over.

Monday, June 29, 2015

Paired with AI and VR, Google Earth Could Change the Planet

As reported by Wired: THE JAMES RESERVE is a place where the natural meets the digital.

Part of the San Jacinto mountain range in Southern California, the James is a nature reserve that covers nearly 30 acres. It’s closed to the public. It’s off the grid. Vehicles aren’t allowed. But Sean Askay calls it “one of the most heavily instrumented places in the US.” Robots on high-tension cables drop climate sensors into this high-altitude forest. Bird’s nests include automated cameras and their own sensors. Overseen by the University of California, Riverside, the reserve doubles as a research field station for biologists, academics, and commercial scientists.

In 2005, as a master’s student at the university, Askay took the experiment further still, using Google Earth to create a visual interface for all those cameras and sensors. “Basically, I built a virtual representation of the entire reserve,” he says. “You could ‘fly in’ and look at live video feeds or temperature graphs from inside a bird box.”

Somewhere along the way, the project caught the eye of Google’s Vint Cerf, a founding father of the Internet, and in 2007, Askay moved to Mountain View, California, home to Google headquarters. There, he joined the team that ran Google Earth, a sweeping software service that blends satellite photos and other images to create a digital window onto our planet (and other celestial bodies). Since joining the company, the 36-year-old has used the tool to build maps of war casualties in Iraq and Afghanistan. He put the service on the International Space Station, so astronauts could better understand where they were. Working alongside Buzz Aldrin, he built a digital tour of the Apollo 11 moon landing.

Now, as Google Earth celebrates its 10th anniversary, Askay is taking over the entire project—as lead engineer—following the departure of founder Brian McClendon. He takes over at a time when the service is poised to evolve into a far more powerful research tool, an enormous echo of his work at the James Reserve. When it debuted in 2005, Google Earth was a wonderfully intriguing novelty. From your personal computer, you could zoom in on the roof of your house or get a bird’s eye view of the park where you made out with your first girlfriend. But it proved to be more than just a party trick. And with the rapid rise of two other digital technologies—neural networks and virtual reality—the possibilities will only expand.
Through an extension to Google’s Chrome web browser called Earth View, you can view “the most beautiful and striking” satellite images from around the world, “diving in” to places like Cuba. GOOGLE

A Visit to Prague
Neural networks—vast networks of machines that mimic the web of neurons in the human brain—can scour Google Earth in search of deforestation. They can track agricultural crops across the globe in an effort to identify future food shortages. They can examine the world’s oil tankers in an effort to predict gas prices. And it so happens that Google runs one of the most advanced neural networking operations in the world. For Google Earth, Askay says, “machine learning is the next frontier.”
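To make that concrete, here is a toy sketch of the kind of model such systems rely on: a tiny convolutional network that labels small satellite tiles as forest or cleared land. The architecture, tile size, and classes are illustrative inventions, not Google's actual models:

```python
# Toy convolutional classifier for 64x64 RGB satellite tiles
# (forest vs. cleared land). Untrained and illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two classes: forest, cleared
)

tile = torch.randn(1, 3, 64, 64)    # stand-in for one image tile
probs = model(tile).softmax(dim=1)  # after training: class probabilities
print(probs)
```

Trained on labeled tiles and swept across fresh imagery, a model like this is what turns a globe of pixels into maps of deforestation, crop cover, or tanker traffic.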

According to Askay’s boss, Rebecca Moore, the company is already using neural networks to examine Google’s vast trove of satellite imagery. “We have the Google Brain,” she says, referring to the central neural networking operation Google has built inside the company, “and we’re doing some experiments.” That’s news. But it’s not that surprising. Two startups—Orbital Insight and Descartes Labs—are already doing much the same thing.

Meanwhile, virtual reality—as exhibited by headsets like Facebook’s Oculus Rift and Google Cardboard—is bringing a new level of fidelity and, indeed, realism to the kind of immersive digital experience offered by Google Earth. Today, using satellite imagery and street-level photos, Askay and Google are already building 3-D models of real-life places like Prague that you can visit from your desktop PC (see video at top). But in the near future, this experience will move into Oculus-like headsets, which can make you feel like you’re really there.

“We have so much interesting stuff,” Askay says of Google Earth’s massive collection of images. “How amazing would it be to experience Google Earth in that environment?”

The Google Outreach
Google isn’t the only one that will drive the evolution of Google Earth. In 2007, not long after taking the job at Google, Askay flew to Brazil, helping an indigenous tribe, the Surui, map deforestation in their area of the Amazon, and this gave rise to a wider project called Google Earth Engine. With Earth Engine, outside developers and companies can use Google’s enormous network of data centers to run sweeping calculations on the company’s satellite imagery and other environmental data, a digital catalog that dates back more than 40 years.

“So, if you want to look at 40 years of Landsat imagery and do change detection over time, you can,” Askay says. “You could do retrospective models of where deforestation took place and how fast, as well as predictive models and even near real-time detection. We’re getting to the point where we can start sending alerts saying that something that looks like deforestation has occurred in the last three days.”
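As a hypothetical illustration of that retrospective analysis, an NDVI-based change-detection query in Earth Engine's Python API might look like the sketch below; the region, dates, collection ID, and threshold are placeholders, not Google's production pipeline:

```python
# Hypothetical Earth Engine sketch: flag large NDVI drops between two
# Landsat 5 epochs as possible deforestation. All specifics are illustrative.
import ee

ee.Initialize()
region = ee.Geometry.Rectangle([-62.5, -11.5, -61.5, -10.5])  # an Amazon tile

def median_ndvi(start, end):
    col = (ee.ImageCollection('LANDSAT/LT5_L1T_TOA')
             .filterBounds(region)
             .filterDate(start, end))
    return col.median().normalizedDifference(['B4', 'B3'])  # NIR vs. red

before = median_ndvi('1990-01-01', '1992-12-31')
after = median_ndvi('2010-01-01', '2012-12-31')
possible_loss = before.subtract(after).gt(0.3)  # big vegetation drop
```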

As it stands, Earth Engine is only available to a limited number of outsiders, but Askay and Moore say Google plans to gradually open it up to a much larger audience. With a project called Map of Life, independent researchers are already examining how global warming is changing the habitat ranges of particular animal species. Others are working to track water resources. The World Resources Institute now uses the service to provide a map of deforestation not only in the Amazon, but across the globe.

Learning Gets Deep
At the moment, this map is generated with traditional computing techniques. But according to WRI chief technology officer Aaron Steele, the organization is now building a system that can expand, accelerate, and improve the process with neural networking. At companies like Google and Facebook, “deep learning” is already recognizing faces and objects in online photos uploaded by everyday internet users, and many believe the technology can significantly accelerate the analysis of satellite imagery—and to much greater effect.

“It’s a real breakthrough in computer vision, though in order to make it work, you need a whole lot of compute power,” says Steven Brumby, a former Los Alamos National Lab researcher and co-founder of Descartes Labs. “The opportunity is enormous.”

In recent months, Google has shown how effective this AI technology can be in analyzing ground-level photos captured as part of its Street View project. In building its online maps, Google once used human editors to identify addresses in photos. Now, neural networking does this automatically.

James Crawford, an ex-Googler and the CEO of Orbital Insight, a company dedicated to examining satellite imagery with neural networks, agrees that Google Earth is ripe for this kind of automatic analysis. But after using Google Earth Engine as an outside developer, he says the service may need to evolve further before it’s suited to such work. “The level of control we need in our pipeline, it didn’t really work for us,” he says. “But that could change.”

The Cardboard Effect
Elsewhere at Google, an eclectic team of engineers is building a virtual reality system for use with headsets that strap around the eyes. It began with a “20 percent” project called Google Cardboard, a headset made out of cardboard—literally—that works in tandem with your smartphone. In the beginning, this seemed like an ironic comment on the wider push toward virtual reality. But it has evolved into something far more serious.

These engineers have designed 16-lens cameras that can capture 360-degree stereoscopic images of the real world. GoPro will soon offer these cameras to the world at large. And Google is offering a service that can stitch those images into a 360-degree digital environment for viewing in Cardboard and, potentially, other headsets. “We have ambitions beyond just Cardboard,” project leader Clay Bavor told us last month. “There are many other things going on.”

The project is a natural fit for Google Earth. The Cardboard team recently unveiled what it calls Google Expeditions, a way for school students to experience distant lands via the headset, and the concept is that much more attractive in the context of the earth as a whole. “You can imagine this as a Google Earth experience,” Askay says.

For Moore, this kind of thing isn’t a novelty. It’s education. “It’s not just for fun and gaming,” she says. “It can give people a more immersive understanding of the planet—places that matter and places that are changing.”
Street View now includes “special collections” that show off places like the Great Barrier Reef. GOOGLE

A New Globe
It’s also worth remembering that Google now has its own satellite company. Last summer, it acquired Skybox, a startup that uses cube satellites to take more frequent and higher-resolution photos from the skies. According to Askay and Moore, Google hasn’t yet incorporated the Skybox imagery into Google Earth. But this will come.

On one level, the prospect of pairing Skybox with technology such as neural networks is a frightening thing—another erosion of privacy in the physical world. But it also brings enormous possibilities. Today, Google Earth is a nice way to look at the planet—not to mention Mars, the Moon, and the heavens. But in the years to come, it will grow into something else. Virtual reality will bring new fidelity. And AI and other types of data analysis will bring a new understanding of our planet.

“What I’m looking forward to is combining Google Earth with the kind of dynamic data coming out of Earth Engine—data on deforestation, floods, temperatures,” Moore says. “If you render that kind of information on Google Earth, it becomes a living, breathing dashboard of the planet. You can put in everyone’s hands, not just charts and graphs of what’s going on, but high-resolution information that’s sitting, almost literally, on the surface of the earth.” It’s like Askay’s work at the James Reserve. But on a much larger scale.

June 30th Gets a Leap Second Because Earth's Rotation is Slowing Down

As reported by Gizmodo: If you’re the sort of person who lives by the motto that every second counts, next week you get to put your money where your mouth is. That’s because, as we first learned back in January, we’re all being gifted a leap second on June 30th.

Leap seconds can wreak havoc across the Internet, but, as NASA explained in detail this week, they’re essential in order to compensate for our planet’s slowing rotation.
Most of us live our lives in the steadfast world of Coordinated Universal Time (UTC), where Earth days are treated as precisely 86,400 seconds long. But in the real world, days haven’t been that long since about 1820. That’s because a gravitational tug-of-war between the Earth and the moon is causing our planet’s rotation to slow down, making the days a wee bit longer as the years roll on. Today, the average day is approximately 86,400.002 seconds long.
You might be thinking: Okay, that’s interesting, but who’s really counting? Scientists, of course! As NASA explains in the video below, Earth scientists monitor precisely how long it takes our planet to complete a full rotation (i.e., a day) using a technique called Very Long Baseline Interferometry (VLBI). This essentially involves collating data from a worldwide network of stations every single day. And the results are not always predictable. Day length, it turns out, is influenced by everything from tectonic activity to groundwater to El Niño events.
Because day length as measured by VLBI pretty much never hits 86,400 seconds on the nose, scientists have created a second time standard, Universal Time 1 (UT1), based on the Earth’s precise rotation. When UT1 and UTC drift too far apart, leap seconds are added to keep the two time scales within 0.9 seconds of each other. That’s why, as the hour approaches midnight on Tuesday, the clock will strike 23:59:60 before rolling over to 00:00:00 on July 1st.
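The article's figures make the timescale easy to estimate. A back-of-the-envelope calculation shows why leap seconds arrive every year or two, though the excess day length varies, so the real schedule is irregular:

```python
# How long until |UT1 - UTC| reaches the 0.9 s trigger, if the mean day
# runs ~0.002 s long (the article's figure)? Rough estimate only.
excess_per_day = 86400.002 - 86400.0   # seconds of drift per day
trigger = 0.9                          # allowed divergence before a leap second

days_to_trigger = trigger / excess_per_day
print(round(days_to_trigger))          # ~450 days
```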
Try to make the most of that extra second, even if the Internet has an aneurysm. After all, not even Earth’s brightest scientists can predict when the next one’s coming.
To prevent any Internet mayhem, Google has been using a technique called ‘leap smear,’ so that computer systems won’t be confused about the time when the leap second is added. Still, there may be some issues: the day numbering used by the Chinese BeiDou ‘GPS’ system could cause errors for GNSS receivers that rely on it for location and timing.
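The smear idea is conceptually simple: rather than inserting a 61-second minute, time servers run their clocks fractionally slow across a window, so the extra second is absorbed gradually. Below is a minimal sketch assuming a linear smear over a 20-hour window; Google's actual smear function and window length have varied:

```python
# Minimal linear "leap smear" sketch. Clocks served from this function
# run imperceptibly slow through the window and end exactly one second
# behind a leap-ignorant count, matching post-leap UTC.
SMEAR_WINDOW = 20 * 3600.0  # seconds; assumed window length

def smeared_clock(naive_time, window_start):
    """Map a leap-ignorant timestamp onto the smeared timescale."""
    elapsed = min(max(naive_time - window_start, 0.0), SMEAR_WINDOW)
    applied = elapsed / SMEAR_WINDOW  # fraction of the leap second absorbed
    return naive_time - applied
```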

Sunday, June 28, 2015

World’s First 3D Printed Supercar is Unveiled

As reported by 3dPrint: The automobile industry has been relatively stagnant for the past several decades. While new car designs are released annually, and computer technology has advanced by leaps and bounds, the manufacturing processes and the effects that these processes have on our environment have remained relatively unchanged. Over the past decade or so, 3D printing has shown some promise in the manufacturing of automobiles, yet it has not quite lived up to its potential, at least according to Kevin Czinger, founder and CEO of a company called Divergent Microfactories (DM).
The Blade 3D Printed Supercar
Today, at the O’Reilly Solid Conference in San Francisco, Kevin Czinger is set to shock the world with a keynote presentation titled “Dematerializing Auto Manufacturing.”
“Divergent Microfactories is going to unveil a supercar that is built based on 3D printed parts,” Manny Vara of LMG PR tells 3DPrint.com. “It is very light and super fast — can you say faster acceleration than a McLaren P1, and 2x the power-to-weight ratio of a Bugatti Veyron? But the car itself is only part of the story. The company is actually trying to completely change how cars are made in order to hugely reduce the amount of materials, power, pollution and cost associated with making traditional cars.”
The vehicle, called the Blade, has 1/3 the emissions of an electric car and 1/50 the factory capital costs of other manufactured cars. DM’s manufacturing process also differs quite a bit from that of previous 3D printed vehicles we have seen, such as Local Motors’ car, which has been printed several times. Instead of 3D printing an entire vehicle, DM 3D prints aluminum ‘nodes’ which act much like Lego blocks. 3D printing allows DM to create elaborate, complex-shaped nodes which are then joined together by off-the-shelf carbon fiber tubing. Once the nodes are printed, the chassis of a car can be completely assembled in a matter of minutes by semiskilled workers. Constructing the chassis this way requires much less capital and far fewer resources, and it doesn’t demand the extremely skilled and trained workers that other car manufacturing techniques rely on. The important goal DM is striving for—and appears to have accomplished—is the reduction of pollution and environmental impact.
Individual 3D printed aluminum nodes
Today, Czinger and the rest of the team at Divergent Microfactories will be unveiling their first prototype car, the Blade.
“Society has made great strides in its awareness and adoption of cleaner and greener cars,” explains CEO Kevin Czinger. “The problem is that while these cars do now exist, the actual manufacturing of them is anything but environmentally friendly. At Divergent Microfactories, we’ve found a way to make automobiles that holds the promise of radically reducing the resource use and pollution generated by manufacturing. It also holds the promise of making large-scale car manufacturing affordable for small teams of innovators. And as Blade proves, we’ve done it without sacrificing style or substance. We’ve developed a sustainable path forward for the car industry that we believe will result in a renaissance in car manufacturing, with innovative, eco-friendly cars like Blade being designed and built in microfactories around the world.”
Assembling of the 3D printed nodes and carbon fiber tubing to construct the chassis
The Blade is one heck of a supercar, capable of going from 0-60 MPH in a mere 2.2 seconds. It weighs just 1,400 pounds, and is powered by a 4-cylinder 700-horsepower bi-fuel internal combustion engine that is capable of using either gasoline or compressed natural gas as fuel. The car chassis is made up of approximately 70 3D printed aluminum nodes, and it took only 30 minutes to build the chassis by hand. The chassis itself weighs just 61 pounds.
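Those figures also let us sanity-check the power-to-weight claim quoted earlier. In the sketch below, the Blade numbers come from the article, while the Bugatti Veyron Super Sport figures are commonly published numbers assumed here for comparison:

```python
# Back-of-the-envelope power-to-weight comparison. Veyron figures are
# an assumption (published Super Sport specs), not from the article.
blade_hp, blade_lb = 700, 1400
veyron_hp, veyron_lb = 1184, 4162

blade_pw = blade_hp / blade_lb      # 0.50 hp per pound
veyron_pw = veyron_hp / veyron_lb   # ~0.28 hp per pound
print(blade_pw / veyron_pw)         # ~1.8x -- roughly the "2x" claimed
```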
“The body of the car is composite,” Vara tells us. “One cool thing is that the body itself is not structural, so you could build it out of just about any material, even something like spandex. The important piece, structurally, is the chassis.”
Kevin Czinger, Founder and CEO, Divergent Microfactories, Inc. with the Blade Supercar
The initial plan is for DM to scale up to an annual production of 10,000 of these limited supercars, making them available to potential customers. That isn’t all, though: DM doesn’t plan to stop at manufacturing cars itself. It plans to make the technology available to others as well. On top of selling these supercars, DM will also sell the tools and technologies so that small teams of innovators and entrepreneurs can open microfactories and build their own cars, based on their own unique designs. Whether it is a sedan, pickup truck or another type of supercar, it is all possible with this proprietary 3D printed node technology.
Pre-painted Blade supercar
The node-enabled chassis of cars built using this unique 3D printing method is up to 90% lighter, much stronger, and more durable than the chassis of cars built with more traditional techniques. Could we be looking at a great ideological change within the automobile manufacturing industry? Lighter, stronger, more durable, more affordable, environmentally friendlier vehicles are definitely something that just about anyone should consider a step in the right direction.
3D printing has been touted as a technology of the future, for the future, enabling individual customization of many products. Now, the ability for entrepreneurs to enter an industry previously overrun by huge corporations could mean a future with individualized, custom vehicles which perform and appear just the way we want them. If Divergent Microfactories has a say, this will be our future, and that future isn’t too far off.
Pre-painted Blade supercar
What do you think about this 3D printed supercar? Do you like the idea of entrepreneurs having an opportunity to fabricate their own line of vehicles? Is DM onto something with this unique method of automobile manufacturing? Discuss in the Divergent Microfactories 3D Printed Supercar Forum thread on 3DPB.com.  Check out the video below.


Saturday, June 27, 2015

Virtually Climb El Capitan with Google's First Vertical Street View

As reported by LA Times: Google Maps has announced its first vertical Street View, giving people the opportunity to virtually climb El Capitan in Yosemite National Park.

"People around the world will now be able to virtually experience the unique act of ascending a 3,000-foot cliff by going on a self-directed, vertical climb," the Mountain View, Calif., company said. "Climbers" can make their way up the Nose route and part of the Dawn Wall.

To collect the imagery for Google Maps, the company worked with photographers and partnered with climbers Lynn Hill, Alex Honnold and Tommy Caldwell.

"Yosemite’s driven so much of my life that I’m excited to be able to share it with the world through my eyes," Caldwell said. "These 360-degree panoramic images are the closest thing I’ve ever witnessed to actually being thousands of feet up a vertical rock face -- better than any video or photo. But my hope is that this new imagery will inspire you to get out there and see Yosemite for yourself."


Along the route, and at more than 20 other spots across the broad face of El Capitan, armchair climbers can look around in all directions — down to the floor of Yosemite Valley, up the blank face to the top of the cliff, at the broad view of Yosemite National Park’s other granite monoliths and up close at the pebbled grain and cracks of El Capitan itself.

To check out more of Caldwell's experience working with Google, including the challenges of capturing the difficult ascent up the famous rock face, you can read his blog post here.

For now, vertical Street View is available only for El Capitan, but Google spokeswoman Susan Cadrecha said the company would "continue to try and expand the limits -- and reach new heights -- for Street View moving forward."

It was Russell’s idea to apply Street View to El Capitan, which is a three- or four-hour drive from Google’s Mountain View, Calif., headquarters. As it continues to use Street View to photograph and map many millions of miles of the Earth’s roads, the company is increasingly using the equipment away from populated areas. It has, for example, attached cameras to two rafts floating down the Colorado River through the Grand Canyon.

The New York Times has an interactive feature about the climb as well.



Friday, June 26, 2015

'No hoax': Lexus creates a hoverboard

As reported by USA Today: Toyota's luxury brand Lexus said Wednesday it has created a hoverboard.

Yes, a hoverboard -- as in something that looks like a skateboard without wheels that can hover above the ground, like the one the character Marty McFly rode in the Back to the Future movie trilogy.

"It works. It's not a hoax," says spokesman Moe Durand.

Lexus says it won't be sold. It's for demonstration purposes. It operates using magnetic levitation, with liquid nitrogen-cooled superconductors and permanent magnets that "combine to allow Lexus to create the impossible." It says it is working with the world's leading experts in super-conductive technology.

As cool as that sounds, there are some major limitations. Since it operates magnetically, it only can hover over a steel surface. And it also only works as long as the liquid nitrogen holds out.


The auto brand released pictures of the hoverboard floating above the ground, but no pictures of it in motion or with anyone standing on it. Durand says, however, it can support weight. Limitations aside, the hoverboard is sure to win fans -- maybe even sell a few cars.

"At Lexus, we constantly challenge ourselves and our partners to push the boundaries of what is possible," said Mark Templin, executive vice president for Lexus International, in a statement. "That determination, combined with our passion and expertise for design and innovation, is what led us to take on the Hoverboard project."

Lexus says the board is being tested in Barcelona, Spain, calling it "the perfect example of the amazing things that can be achieved when you combine technology, design and imagination."

For now, it's mostly about promotion. Lexus tried to give it the look of one of its cars with a design that mimics its bold hourglass-shaped grilles. It says it is made with a range of materials, including bamboo.

Thursday, June 25, 2015

Here’s a Perfect Example of Why We Need More Consumer Drone Regulation

As reported by Robotic Trends: Listen, I’m all for personal freedom and less government infringement on our everyday lives. But the current wildfires sweeping across California are a perfect example, unfortunately, as to why stricter consumer drone regulation might be needed.

In the last week, state and federal firefighters have fought more than 270 wildfires in California alone. One fire burned nearly 18,000 acres of land and cost more than $7 million to contain.

Here’s the problem: firefighters are seeing more unauthorized consumer drones flying over active wildfires. Maybe the drone owners don’t know or maybe they don’t care, but temporary flight restrictions are placed over wildfire areas due to the aircraft used to help contain the fires.

Aircraft are used during wildfires to knock down flames and survey the burn area. When drones are in that restricted airspace, they’re in the path of those aircraft.


“A couple times yesterday we noticed some drones in our airspace,” said Cal Fire’s Steve Kaufmann (watch the KSEE video below). “I can’t encourage the public enough that when we have a fire in the area, they really need to try to have restraint.”

“Some of the hobbyist drones will get up in the air, and if their drone is in the air, we actually have to cease our air operations until we can get our arms around that,” he added.

Near-collisions between drones and commercial airplanes were enough to get the Consumer Drone Safety Act introduced; can you imagine the negative effects one crash, or one wildfire spiraling even further out of control, could have on hobbyist drones?

If you need more evidence of how some drone hobbyists are crossing the line, check out the video below, which you might have already seen, of John Thompson flying his drone over an active house fire. Eventually a firefighter sprays the drone (12 minutes into the video) and forces Thompson to fly away. Thompson took to Facebook and accused the firefighters of misconduct and said he would bill them to replace the $2,200 drone.

Thankfully, many commenters said Thompson was in the wrong. Again, I’m all for a laissez-faire system when it comes to flying consumer drones, but if we can’t police ourselves, we’re only asking for stricter regulation.

Wednesday, June 24, 2015

Deus ex Vehiculum: Security in the Age of Computerized Cars

As reported by the Economist: One ingenious conceit employed to great effect by science-fiction writers is the sentient machine bent on pursuing an inner mission of its own, from HAL in “2001: A Space Odyssey” to V.I.K.I. in the film version of “I, Robot”. Usually, humanity thwarts the rogue machine in question, but not always. In “Gridiron”, published in 1995, a computer system called Ismael—which controls the heating, lighting, lifts and everything else in a skyscraper in Los Angeles—runs amok and wreaks havoc on its occupants. The story's cataclysmic conclusion involves Ismael instructing the skyscraper’s computer-controlled hydraulic shock-absorbers (installed to damp the swaying caused by earthquakes) to shake the building, literally, to pieces. As it does so, Ismael’s cyber-spirit flees the crumbling tower by e-mailing a copy of its malevolent code to a diaspora of like-minded computers elsewhere in the world.

While vengeful cyber-spirits may not lurk inside today’s buildings or machines, malevolent humans frequently do. Taking control remotely of modern cars, for instance, has become distressingly easy for hackers, given the proliferation of wireless-connected processors now used to run everything from keyless entry and engine ignition to brakes, steering, tire pressure, throttle setting, transmission and anti-collision systems. Today’s vehicles have anything from 20 to 100 electronic control units (ECUs) managing their various electro-mechanical systems. Without adequate protection, the “connected car” can be every bit as vulnerable to attack and subversion as any computer network. 

Were that not worrisome enough, motorists can expect further cyber-mischief once vehicle-to-vehicle (V2V) communication becomes prevalent, and cars are endowed with their own IP addresses and internet connections. Meanwhile, car makers are beginning to offer over-the-air updates, using cellular connections, for patching flaws in their vehicles’ software. This makes it easier for attackers to infiltrate not just the odd vehicle, but thousands of them at a time. BMW recently beamed an over-the-air software update to 2.2 million of its customers’ cars. The potential for fleet-wide cyber-attacks ought to have car-makers seriously concerned.

Nor is it just vehicle theft motorists have to worry about. Car hacking can threaten people’s lives, both in the vehicle and outside it. The mind shudders at the thought of malicious code being inserted remotely into the logic of a self-driving car speeding autonomously down the highway.

This is not science fiction. Land Rover recently demonstrated a smartphone app that lets an owner take wireless control of his machine while up to ten meters (33 feet) away from it. The aim, says Land Rover, is to let off-road drivers maneuver their vehicles safely over dangerous stretches of terrain, or to assist urban motorists trying to back a vehicle out of a tight parking spot. Fine, except that such a level of remote control could also make it easier for thieves to steal parked cars, or terrorists to create chaos on the road. 

What is being done to protect vehicles from cyber-attack? Several recent events have stirred legislators into action. Last summer, for instance, during a meeting of automotive engineers and security experts, a 14-year-old schoolboy showed industry experts how to take control of a car remotely using circuitry he had lashed up overnight with $15 worth of parts bought from Radio Shack the day before. The youngster turned the windscreen wipers on and off, locked and unlocked the doors, engaged the engine-start mechanism, and had the headlamps flash to the beat of a tune on his iPhone. “It was mind-blowing,” recounted Andrew Brown, vice-president and chief technologist at Delphi Automotive, a manufacturer of auto parts.

More recently, Consumer Reports, a publication owned by a consumer advocacy and independent testing center in Yonkers, New York, got an eye-opener during a visit to a National Highway Traffic Safety Administration (NHTSA) laboratory. The publication’s editors were surprised when a technician turned off the engine of a test car they were driving using nothing more than a mobile phone. NHTSA has found ways of tampering remotely with door locks, seat-belt tensioning, instrument panels, brakes, steering mechanisms and engines—all while the test cars were being driven. Since its laboratory visit, Consumer Reports has been urging America's Congress to legislate for the highest possible security standards for car computer systems.

The message seems to be getting through. In recent weeks, the House Committee on Energy and Commerce has questioned all 17 motor manufacturers that sell vehicles in America, as well as NHTSA itself, about their plans to thwart car hackers. For its part, NHTSA has compiled a 40-page report on how best to deal with cyber-threats on the road. The safety agency has shared its findings with car-makers, but has been understandably reluctant to publicize the counter-measures in detail. 

The problem confronting car-makers everywhere is that, as they add ever more ECUs to their vehicles, to provide more features and convenience for motorists, they unwittingly expand the “attack surface” of their on-board systems. In security terms, this attack surface—the exposure a system presents in terms of its reachable and exploitable vulnerabilities—determines the ease, or otherwise, with which hackers can take control of a system.

In a car, the remote attack surface includes such things as the vehicle’s on-board diagnostics, Bluetooth and WiFi ports, telematic devices like GPS navigation and cellular radios, plus radio-frequency chips in remote entry keys, tire pressure sensors and the like that communicate wirelessly with transponders connected to the vehicle’s Controller Area Network (CAN).

By functioning as a common communications bus, the CAN’s two-wire network for transmitting digital messages around a vehicle allows manufacturers to add features and accessories to a vehicle simply by plugging the additional components into the bus, instead of having to run fresh wires or install additional networks. That makes wiring the innards of cars easier and cheaper.

But by multiplexing signals from different devices on the CAN’s common communications channel, it is possible for vulnerabilities associated with an attack surface to talk to components that perform actual driving functions. For instance, it is not far-fetched to imagine an on-board cellular connection (such as GM’s OnStar network) being tricked into allowing hackers to inject malicious code into ECUs managing, say, the steering, braking or engine controls—courtesy of the shared CAN bus.
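A toy example makes the danger concrete: on a CAN bus, any node can transmit any message ID, and receiving ECUs trust the ID alone. The sketch below uses the open-source python-can library on a Linux virtual interface (vcan0); the arbitration ID and payload are hypothetical, not taken from any real vehicle:

```python
# Why a shared CAN bus is risky: frames carry no sender authentication,
# so a compromised unit can emit messages indistinguishable from those
# of a brake or steering ECU. Runs against a virtual interface (vcan0).
import can

bus = can.interface.Bus(channel='vcan0', bustype='socketcan')

spoofed = can.Message(arbitration_id=0x1A0,            # hypothetical ID
                      data=[0x00, 0xFF, 0x00, 0x00],   # hypothetical payload
                      is_extended_id=False)
bus.send(spoofed)

for msg in bus:  # every node sees every frame on the shared bus
    print(hex(msg.arbitration_id), msg.data.hex())
```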

By far the best study to date of vehicle security is a survey carried out by Charlie Miller, formerly with the National Security Agency and now at Twitter, and Chris Valasek of IOActive, a security services company based in Seattle. The two researchers examined the remote attack surfaces of 20 popular models on American roads. In each case, they traced the network architecture along with all the computer-controlled features of the vehicles involved. In doing so, they were able to draw conclusions about how vulnerable, in principle, the various vehicles were to remote attack.

Of the half dozen attack surfaces Dr Miller and Mr Valasek analysed in detail for each vehicle, the most vulnerable in all instances turned out to be the on-board Bluetooth feature (“a very reliable entry point for attackers”), a car’s cellular radio service (“the holy grail of automotive attack”), and any browser-based internet connection available (“widely understood by attackers”).

The three most hackable vehicles—in terms of how their network architectures permitted attack surfaces to talk to components performing physical actions—were the 2014 Jeep Cherokee, the 2015 Cadillac Escalade and the 2014 Infiniti Q50. This being litigious America, the automakers concerned quickly found themselves in the legal cross-hairs, as owners sought financial compensation for their vehicles’ perceived vulnerabilities.

One conclusion of the study is that, like computer networks, vehicles need layered defenses, so that penetrating to the heart of the system, though not impossible, becomes increasingly tedious and costly for an attacker. Another obvious suggestion is that automotive networks like the CAN bus, along with its local interconnections, should be designed in a way that isolates ECUs that talk to the outside world from those that control critical functions within the vehicle.

Ultimately, of course, cars are going to need some method of detecting cyber-attacks, and to have the means to neutralize them. In one important way, threat detection is easier in cars than in the networks used in offices. On a CAN bus, for instance, only ECUs are engaged in swapping messages with one another; no gullible humans are involved—as they are in offices—to open back doors unwittingly to phishing attacks from cyber-crooks masquerading as colleagues or customers.

That makes it easier to spot anomalies caused by an attacker’s injected code. To capture the attention of a targeted ECU, a bogus message has to be sent at a much higher rate—anything up to 100 times normal—in order to swamp legitimate messages being received by the processor. A simple device that plugs into a car’s diagnostic port can easily detect such exceptional traffic and instruct the CAN bus to ditch it.
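A minimal sketch of such a detector, with a window and threshold of my own choosing, might track per-ID message rates like this:

```python
# Rate-based CAN anomaly detection: learn a baseline frequency for each
# arbitration ID and flag IDs whose rate suddenly jumps far above it.
# Window size and multiplier are illustrative.
from collections import defaultdict, deque

WINDOW = 1.0       # seconds of history per ID
MULTIPLIER = 20.0  # "much higher than normal" cutoff

class RateMonitor:
    def __init__(self):
        self.recent = defaultdict(deque)  # arb_id -> timestamps in window
        self.baseline = {}                # arb_id -> learned messages/sec

    def observe(self, arb_id, now):
        q = self.recent[arb_id]
        q.append(now)
        while q and now - q[0] > WINDOW:
            q.popleft()
        rate = len(q) / WINDOW
        base = self.baseline.setdefault(arb_id, rate)
        if base > 0 and rate > MULTIPLIER * base:
            return True  # likely injected traffic; tell the bus to ditch it
        self.baseline[arb_id] = 0.99 * base + 0.01 * rate  # adapt slowly
        return False
```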

That is a good start. But it does not mean cars can be made immune to cyber-attack. There is no such thing as absolute security. As Dr Miller and Mr Valasek note, even firms like Microsoft and Google have been unable to make a web browser that cannot go a few months without needing some critical security patch. Cars are no different. All the more so once they start communicating with one another, as well as with traffic signs and other roadside equipment.