
Friday, December 29, 2017

China's Shenzhen City Electrifies all 16,359 of its Public Buses

As reported by Engadget: If you ever see anyone crowing about how nobody's taking the initiative on sustainable transport, point them in the direction of Shenzhen. The Chinese city has announced that it has successfully electrified its entire fleet of public buses, all 16,359 of them. In addition, more than half of Shenzhen's cabs now run on electricity, and the plan is to get rid of the remaining gas-powered rides by 2020.

Of course, it's not as simple as just dumping more than 16,000 diesel-powered buses in a lake and hoping for the best. There was also the matter of building out 510 charging stations and an additional 8,000 charging poles across the city. According to EyeShenzhen, a bus can be fully recharged in two hours, and each station can serve up to 300 vehicles a day.

Beyond the general environmental good, there are already some tangible benefits for local authorities. China is notorious for its smog problems, and the fleet of electric buses avoids releasing around 1.35 million tons of CO2 into the local atmosphere each year. Then there's the fact that the city is much quieter now, because the buses themselves aren't as noisy.

And then there's the overall cost savings, since the vehicles use nearly 75 percent less energy than their fossil-fuel powered equivalents. Yes, it took around $490 million in subsidies to get the program started, but that's a small price to pay for cleaner air, quieter cities and a huge boost to the renewables world.


Thursday, December 28, 2017

An Electric Pickup Truck Will Be Tesla’s Top Priority After the Model Y

For the past five years, Elon Musk has been mulling the prospect of an electric
pickup truck.  Now, he's confirmed that the idea is set to become a reality once work
on the Model Y has been completed.
As reported by Futurism: Elon Musk has once again promised that Tesla will bring an electric pickup truck to market. Apparently, the automaker plans to focus on the vehicle as soon as work on the as-yet-unrevealed Model Y has been completed.

In a tweet about the prospect of an electric pickup truck, Musk states that he’s been thinking about the project for nearly five years. This checks out — he mentioned the idea at an event in November 2013 and was talking about it again as recently as April 2017.
Musk showed off an early concept image of the Tesla pickup when the company revealed its electric semi in November 2017. The vehicle is expected to be based in part on the design of the 18-wheeler.

When he was first discussing the vehicle, Musk said that one thing about current pickup trucks that he would like to improve upon is how they handle in the absence of a heavy load. He observed that many trucks don’t handle well with an empty bed, and suggested that a well-placed battery pack could give the vehicle a more advantageous center of gravity.
Of course, the biggest game changer is the prospect of a pickup truck that doesn’t use gasoline or diesel. Pickups are typically much less fuel efficient than standard cars, so an electric version could be a major boon for the environment, given the popularity of these vehicles. In 2016, the Ford F-150, the Dodge Ram, and the Chevrolet Silverado were the three top-selling vehicles in the United States.

While Tesla’s pickup is in the pipeline, there’s no clear schedule of when it will be made available. For the time being, less well-known companies like Workhorse Group and VIA Motors are already offering electric trucks that feature traditional engines as a backup. Meanwhile, Ford is considering an all-electric F-150 as part of its increased commitment to electric vehicles.



Wednesday, December 27, 2017

This Cartographer’s Deep Dive into Google Maps is Fascinating

As reported by The Verge: Most people who use Google Maps do so without much attention to detail. We just need the directions, the right subway route, or the name of that good sushi place. We don’t spend too much time pondering how Google got so good at mapping the world, and what decisions and choices were made along the way that have made it the go-to navigational tool of our time.

Justin O’Beirne pays attention to these types of details. He’s a cartographer who helped contribute to Apple Maps. So we should trust him when he explains, in depth, what makes Google Maps so superior to any other mapping service.

This week, he published a fascinating essay that explains the concept of the “Google Maps’ Moat.” By this, he means the layers of data surrounding Google Maps that make it all but impossible for Apple or any competitor to ever catch up. “Google has gathered so much data, in so many areas, that it’s now crunching it together and creating features that Apple can’t make — surrounding Google Maps with a moat of time,” he writes. “It makes you wonder how long back Google was planning all of this—and what it’s planning next...”

O’Beirne starts out by marveling at the level of detail available in Google Maps for even extremely small towns, such as the one where he grew up in rural Illinois. He highlights how Google, unlike Apple, is able to display the shapes of individual buildings and even smaller structures like tool sheds and mobile homes. These minute details can be found even in towns with populations in the double-digits. He uses this to lament the corresponding lack of detail in Apple Maps.

He charts the history of Google’s efforts to add buildings large and small, highlighting the search giant’s announcement from 2012 that they were “algorithmically created by taking aerial imagery and using computer vision techniques to render the building shapes.” So in addition to getting a first-person street view of your route, you can zoom out to see a computer-rendered model of the surrounding area, with contextual information such as the shapes and sizes of buildings.

He concludes that aerial imagery from satellites has outpaced Google’s famous Street View vehicles in the amount of data used to create these vivid tableaus. And he asks an important question: “[H]ow long until Google has every structure on Earth?”

Then things get interesting. O’Beirne introduces us to two researchers, Rachelle Annechino and Yo-Shang Cheng, who observed that people often describe the layout of their city as it relates to “main drags” or “commercial corridors.” He then goes on to describe Google’s unique approach to highlighting these “Areas of Interest” (AOI). About a year ago, these “main drags” began showing up in Google Maps as clusters of orange buildings. Google communicates these “Areas of Interest” to its users through a specific orange shading, but with a level of detail that is truly stunning.

O’Beirne writes (emphasis his):
This suggests that Google took its buildings and crunched them against its places. In other words, Google appears to be creating these orange buildings by matching its building and place datasets together[.]
[...]
So Google seems to be creating AOIs out of its building and place data. But what’s most interesting is that Google’s building and place data are themselves extracted from other Google Maps features.
[...]
In other words, Google’s buildings are byproducts of its Satellite/Aerial imagery. And some of Google’s places are byproducts of its Street View imagery...so this makes AOIs a byproduct of byproducts.
This is bonkers, isn’t it? Google is creating data out of data.
This leads O’Beirne to draw some pretty interesting conclusions. If Google has mapped all the buildings and knows precisely what businesses and points of interest are located within, then perhaps the search giant can install augmented reality windshields in its self-driving cars that tell you everything you need to know about adjacent structures. As you’re driving through a city — or being driven, rather — Google Maps can use its accumulated data to pinpoint buildings where you have an upcoming appointment, for example.
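The dataset-matching O’Beirne infers can be pictured as a simple spatial join: take building footprints extracted from imagery, take known places, and mark as an “Area of Interest” any building with enough places clustered around it. The sketch below uses hypothetical toy coordinates and a crude distance threshold; it illustrates the idea of “creating data out of data,” not Google’s actual pipeline.

```python
from math import hypot

# Hypothetical building centroids (from imagery) and place locations
# (shops, restaurants, etc.). All coordinates are made up for illustration.
buildings = {"b1": (0.0, 0.0), "b2": (0.5, 0.2), "b3": (5.0, 5.0)}
places = [(0.1, 0.1), (0.4, 0.3), (0.6, 0.1)]

def tag_areas_of_interest(buildings, places, radius=0.5, min_places=2):
    """Mark a building as part of an 'Area of Interest' when at least
    `min_places` places fall within `radius` of its centroid."""
    aoi = set()
    for name, (bx, by) in buildings.items():
        nearby = sum(1 for (px, py) in places
                     if hypot(px - bx, py - by) <= radius)
        if nearby >= min_places:
            aoi.add(name)
    return aoi

# b1 and b2 sit in a cluster of places, so they get the orange shading;
# the isolated b3 does not.
print(tag_areas_of_interest(buildings, places))
```

A production system would use real polygon geometry and density estimation rather than centroid distances, but the derived-data structure is the same: the AOI layer exists only because the building and place layers already do.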

Another possibility is a ride-hailing service more accurate than Uber. An essay by The Verge’s editor-in-chief Nilay Patel makes a cameo in O’Beirne’s story to highlight the difficulty faced by ride-hailing apps like Uber and Lyft in pinpointing exact pickup and drop-off locations. Uber and Lyft drivers already use Google Maps and Google-owned Waze in such high volumes that both ride-hail services threw up their hands and integrated Google’s exceptional navigation tools into their own apps.

O’Beirne fails to mention Google’s own ride-hail ambitions. Waymo, the self-driving division of Google parent Alphabet, is developing its own ride-hail app in anticipation of launching a commercial self-driving mobility service next year. And Waze has been piloting a car-pooling service in California for the past year.

It’s clear that Google has its sights set on the lucrative ride-hailing market. And with a powerful tool like Google Maps in its arsenal, it could have a leg up on more established players.

Tuesday, December 26, 2017

Elon Musk Shows Off the Tesla Roadster that SpaceX will Send Beyond Mars

As reported by The Verge: Weeks after announcing that he plans to send an original Tesla Roadster to space atop a Falcon Heavy rocket, Elon Musk has released photos of the car being prepped for launch at SpaceX headquarters. The series of photos, posted to Instagram, show the Roadster attached to a fitting and placed between the two halves of the payload fairing that caps the rocket. The photos were posted just hours after a picture leaked on Reddit that showed a grainy view of the car being readied for its final ride.

This will be the inaugural flight of the Falcon Heavy, a rocket that SpaceX has been planning for years. The successor to the Falcon 9, it’s essentially (and simply put) three boosters strapped together, which together provide enough thrust to make it the most powerful rocket in the world. It will give SpaceX the ability to send bigger payloads to space while also helping the company push farther out into the Solar System.

But SpaceX doesn’t want to put a valuable payload on the very first flight, which even Musk has admitted could end (or begin) with an explosion. So the company plans to use a “dummy payload” instead. “Test flights of new rockets usually contain mass simulators in the form of concrete or steel blocks. That seemed extremely boring,” Musk wrote on Instagram today. “Of course, anything boring is terrible, especially companies, so we decided to send something unusual, something that made us feel.”

In April, Musk said he was trying to think of the “silliest thing we can imagine” to stick on top that first Falcon Heavy rocket. And on December 1st, we learned exactly what that meant. “Payload will be my midnight cherry Tesla Roadster playing Space Oddity,” Musk wrote on Twitter. “Destination is Mars orbit. Will be in deep space for a billion years or so if it doesn’t blow up on ascent.”


After some back and forth about whether he was joking, it became clear that Musk meant what he wrote. And there’s nothing really standing in his way — as long as the car doesn’t impact Mars, there aren’t really any laws blocking the effort.


Wednesday, December 20, 2017

Elon Musk Shows Off SpaceX’s Almost Fully-Assembled Falcon Heavy Rocket

As reported by The Verge: Elon Musk has tweeted out photos of SpaceX’s almost fully assembled Falcon Heavy rocket in Cape Canaveral, Florida — the biggest and best glimpse so far into what the final iteration will look like. The rocket’s launch is set for sometime in January, and the vehicle has never gotten this far in development before, so the photos show something quite promising. From the pictures, the biggest missing pieces look to be the payload and nose cone at the top.

The Falcon Heavy consists of three Falcon 9 cores strapped together and will be mostly reusable, with all three cores intended to return to Earth after launch so they can be used for other missions. Musk has said the rocket’s outer cores for this upcoming launch are previously flown Falcon 9 boosters.
As previously reported, the Falcon Heavy will be one of the most powerful rockets ever made, capable of lofting around 140,000 pounds of cargo into low Earth orbit. But given all the delays and challenges endured by Falcon Heavy, Musk has understandably set the bar low for success. “I hope it makes it far enough away from the pad that it does not cause pad damage,” said Musk in July. “I would consider even that a win, to be honest.”

Tuesday, December 19, 2017

To Save Lives, Self-Driving Cars Must Become the Ultimate Defensive Drivers

As reported by Futurism: In early November, a self-driving shuttle and a delivery truck collided in Las Vegas. The event, in which no one was injured and no property was seriously damaged, attracted media and public attention in part because one of the vehicles was driving itself – and because that shuttle had been operating for less than an hour before the crash.

It’s not the first collision involving a self-driving vehicle. Other crashes have involved Ubers in Arizona, a Tesla in “autopilot” mode in Florida and several others in California. But in nearly every case, it was human error, not the self-driving car, that caused the problem.

In Las Vegas, the self-driving shuttle noticed a truck up ahead was backing up, and stopped and waited for it to get out of the shuttle’s way. But the human truck driver didn’t see the shuttle, and kept backing up. As the truck got closer, the shuttle didn’t move – forward or back – so the truck grazed the shuttle’s front bumper.

As a researcher working on autonomous systems for the past decade, I find that this event raises a number of questions: Why didn’t the shuttle honk, or back up to avoid the approaching truck? Was stopping and not moving the safest procedure? If self-driving cars are to make the roads safer, the bigger question is: What should these vehicles do to reduce mishaps? In my lab, we are developing self-driving cars and shuttles. We’d like to solve the underlying safety challenge: Even when autonomous vehicles are doing everything they’re supposed to, the drivers of nearby cars and trucks are still flawed, error-prone humans.

How Crashes Happen
There are two main causes for crashes involving autonomous vehicles. The first source of problems is when the sensors don’t detect what’s happening around the vehicle. Each sensor has its quirks: GPS works only with a clear view of the sky; cameras work only with adequate light; lidar can’t work in fog; and radar is not particularly accurate. There may not be another sensor with different capabilities to take over. It’s not clear what the ideal set of sensors is for an autonomous vehicle – and, with both cost and computing power as limiting factors, the solution can’t be just adding more and more.
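The coverage problem described above can be made concrete with a small sketch: each sensor has conditions under which it is unusable, and the perception system needs to know which sensors remain available as conditions change. The sensor names and failure conditions below are illustrative simplifications, not the configuration of any real vehicle.

```python
# Map each sensor to the environmental conditions that disable it.
# These limits are simplified stand-ins for the quirks described above.
SENSOR_LIMITS = {
    "gps":    {"indoors", "urban_canyon"},  # needs a clear view of the sky
    "camera": {"darkness"},                 # needs adequate light
    "lidar":  {"fog"},                      # laser returns scatter in fog
    "radar":  set(),                        # robust, but low resolution
}

def usable_sensors(conditions):
    """Return the sensors not disabled by any current condition."""
    return {sensor for sensor, limits in SENSOR_LIMITS.items()
            if not (limits & conditions)}

# In fog at night, camera and lidar both drop out at once, leaving the
# vehicle with a much poorer picture of its surroundings.
print(usable_sensors({"fog", "darkness"}))
```

The point of the sketch is that sensor failures are correlated with conditions, not independent, which is why simply adding more of the same sensors doesn't close the gap.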

The second major problem happens when the vehicle encounters a situation that the people who wrote its software didn’t plan for – like having a truck driver not see the shuttle and back up into it. Just like human drivers, self-driving systems have to make hundreds of decisions every second, adjusting for new information coming in from the environment. When a self-driving car experiences something it’s not programmed to handle, it typically stops or pulls over to the roadside and waits for the situation to change. The shuttle in Las Vegas was presumably waiting for the truck to get out of the way before proceeding – but the truck kept getting closer. The shuttle may not have been programmed to honk or back up in situations like that – or may not have had room to back up.
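The fallback behavior described above, stopping and waiting whenever the software meets a situation it wasn't written for, can be sketched as a rule table with a conservative default. All situation and action names here are hypothetical, but the structure shows why the Las Vegas shuttle froze: the default is safe against static hazards and wrong against a hazard that keeps approaching.

```python
# Hypothetical rule table: situations the programmers anticipated,
# each mapped to a planned response.
RULES = {
    "obstacle_ahead_static": "stop_and_wait",
    "pedestrian_crossing":   "stop_and_wait",
    "lane_blocked":          "change_lane",
}

def decide(situation):
    # Anything not in the table falls through to the conservative
    # default -- fine for a parked obstacle, not for a truck that is
    # still backing toward the vehicle.
    return RULES.get(situation, "stop_and_wait")

print(decide("truck_reversing_toward_us"))  # unplanned case, so: stop_and_wait
```

A defensive-driving system would instead reason about how the hazard is evolving (is it approaching? is there room to retreat or warn?) rather than mapping unknown situations to a single static action.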

The challenge for designers and programmers is combining the information from all the sensors to create an accurate representation – a computerized model – of the space around the vehicle. Then the software can interpret the representation to help the vehicle navigate and interact with whatever might be happening nearby. If the system’s perception isn’t good enough, the vehicle can’t make a good decision. The main cause of the fatal Tesla crash was that the car’s sensors couldn’t tell the difference between the bright sky and a large white truck crossing in front of the car.
If autonomous vehicles are to fulfill humans’ expectations of reducing crashes, it won’t be enough for them to drive safely. They must also be the ultimate defensive driver, ready to react when others nearby drive unsafely. An Uber crash in Tempe, Arizona, in March 2017 is an example of this.

According to media reports, in that incident, a person in a Honda CRV was driving on a major road near the center of Tempe. She wanted to turn left, across three lanes of oncoming traffic. She could see two of the three lanes were clogged with traffic and not moving. She could not see the farthest lane from her, in which an Uber was driving autonomously at 38 mph in a 40 mph zone. The Honda driver made the left turn and hit the Uber car as it entered the intersection.

A human driver in the Uber car approaching an intersection might have expected cars to be turning across its lane. A person might have noticed she couldn’t see if that was happening and slowed down, perhaps avoiding the crash entirely. An autonomous car that’s safer than humans would have done the same – but the Uber wasn’t programmed to.

Improve Testing
That Tempe crash and the more recent Las Vegas one are both examples of a vehicle not understanding the situation enough to determine the correct action. The vehicles were following the rules they’d been given, but they were not making sure their decisions were the safest ones. This is primarily because of the way most autonomous vehicles are tested.

The basic standard, of course, is whether self-driving cars can follow the rules of the road, obeying traffic lights and signs, knowing local laws about signaling lane changes, and otherwise behaving like a law-abiding driver. But that’s only the beginning.