Wednesday, December 27, 2017

This Cartographer’s Deep Dive into Google Maps is Fascinating

As reported by The Verge: Most people who use Google Maps do so without much attention to detail. We just need the directions, the right subway route, or the name of that good sushi place. We don’t spend too much time pondering how Google got so good at mapping the world, and what decisions and choices were made along the way that have made it the go-to navigational tool of our time.

Justin O’Beirne pays attention to these types of details. He’s a cartographer who helped contribute to Apple Maps. So we should trust him when he explains, in depth, what makes Google Maps so superior to any other mapping service.

This week, he published a fascinating essay that explains the concept of the “Google Maps’ Moat.” By this, he means the layers of data surrounding Google Maps that make it basically impossible for Apple or any competitor to ever catch up. “Google has gathered so much data, in so many areas, that it’s now crunching it together and creating features that Apple can’t make — surrounding Google Maps with a moat of time,” he writes. “It makes you wonder how long back Google was planning all of this—and what it’s planning next...”

O’Beirne starts out by marveling at the level of detail available in Google Maps for even extremely small towns, such as the one where he grew up in rural Illinois. He highlights how Google, unlike Apple, is able to display the shapes of individual buildings and even smaller structures like tool sheds and mobile homes. These minute details can be found even in towns with populations in the double-digits. He uses this to lament the corresponding lack of detail in Apple Maps.

He charts the history of Google’s efforts to add buildings large and small, highlighting the search giant’s announcement from 2012 that they were “algorithmically created by taking aerial imagery and using computer vision techniques to render the building shapes.” So in addition to getting a first-person street view of your route, you can zoom out to see a computer-rendered model of the surrounding area for contextual information such as the shapes and sizes of buildings.
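
Google has never published the details of that pipeline, so the sketch below is only a deliberately simplified illustration of the general idea of turning overhead pixels into building outlines. The OpenCV calls are real, but the filename, thresholds, and area cutoff are invented for the example and have nothing to do with Google's actual method.

```python
# Toy sketch: extract rough building footprints from one aerial image.
# This is NOT Google's pipeline; it only illustrates the basic idea of
# using computer vision to turn overhead imagery into polygon outlines.
import cv2          # pip install opencv-python
import numpy as np

def extract_footprints(image_path, min_area_px=150):
    """Return polygons (point arrays) for bright blobs such as rooftops."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Rooftops are often brighter than surrounding vegetation or asphalt;
    # adaptive thresholding picks them out despite uneven lighting.
    mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 51, -5)

    # Remove speckle, then trace the outline of each remaining blob.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    footprints = []
    for c in contours:
        if cv2.contourArea(c) < min_area_px:
            continue  # ignore blobs too small to be a structure
        # Simplify each outline into a building-like polygon.
        poly = cv2.approxPolyDP(c, 0.01 * cv2.arcLength(c, True), True)
        footprints.append(poly.reshape(-1, 2))
    return footprints

if __name__ == "__main__":
    shapes = extract_footprints("aerial_tile.png")  # hypothetical input tile
    print(f"Found {len(shapes)} candidate building footprints")
```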

He concludes that aerial imagery from satellites has outpaced Google’s famous Street View vehicles in the amount of data used to create these vivid tableaus. And he asks an important question: “[H]ow long until Google has every structure on Earth?”

Then things get interesting. O’Beirne introduces us to two researchers, Rachelle Annechino and Yo-Shang Cheng, who observed that people often describe the layout of their city as it relates to “main drags” or “commercial corridors.” He then goes on to describe Google’s unique approach to highlighting these “Areas of Interest” (AOIs). About a year ago, these “main drags” began showing up in Google Maps as clusters of orange buildings. Google marks these areas for its users with a distinctive orange shading, and with a level of detail that is truly stunning.

O’Beirne writes (emphasis his):
This suggests that Google took its buildings and crunched them against its places. In other words, Google appears to be creating these orange buildings by matching its building and place datasets together[.]
[...]
So Google seems to be creating AOIs out of its building and place data. But what’s most interesting is that Google’s building and place data are themselves extracted from other Google Maps features.
[...]
In other words, Google’s buildings are byproducts of its Satellite/Aerial imagery. And some of Google’s places are byproducts of its Street View imagery...so this makes AOIs a byproduct of byproducts.
This is bonkers, isn’t it? Google is creating data out of data.
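
O’Beirne doesn’t spell out the mechanics, but the “crunching” he describes amounts to a spatial join: intersect building footprints with place listings and flag the buildings dense enough in places to shade orange. Here is a minimal, made-up sketch of that idea; the buildings, places, and threshold are invented for illustration and are not Google's data or code.

```python
# Minimal sketch of "crunching buildings against places": count how many
# known places (shops, restaurants, etc.) fall inside each building
# footprint and flag dense buildings as an "Area of Interest".
from shapely.geometry import Point, Polygon

# Hypothetical building footprints and place listings.
buildings = {
    "bldg_1": Polygon([(0, 0), (0, 10), (10, 10), (10, 0)]),
    "bldg_2": Polygon([(20, 0), (20, 10), (30, 10), (30, 0)]),
}
places = [
    {"name": "Cafe", "loc": Point(2, 2)},
    {"name": "Bookstore", "loc": Point(5, 5)},
    {"name": "Pharmacy", "loc": Point(8, 3)},
    {"name": "Gas station", "loc": Point(25, 5)},
]

def label_areas_of_interest(buildings, places, min_places=2):
    """Return the set of building ids dense enough in places to shade orange."""
    aoi = set()
    for bldg_id, footprint in buildings.items():
        count = sum(1 for p in places if footprint.contains(p["loc"]))
        if count >= min_places:
            aoi.add(bldg_id)  # a byproduct built from two other byproducts
    return aoi

print(label_areas_of_interest(buildings, places))  # {'bldg_1'}
```
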
This leads O’Beirne to draw some pretty interesting conclusions. If Google has mapped all the buildings and knows precisely what businesses and points of interest are located within, then perhaps the search giant can install augmented reality windshields in its self-driving cars that tell you everything you need to know about adjacent structures. As you’re driving through a city — or being driven, rather — Google Maps can use its accumulated data to pinpoint buildings where you have an upcoming appointment, for example.

Another possibility is a ride-hailing service more accurate than Uber. An essay by The Verge’s editor-in-chief Nilay Patel makes a cameo in O’Beirne’s story to highlight the difficulty faced by ride-hailing apps like Uber and Lyft in pinpointing exact pickup and drop-off locations. Uber and Lyft drivers already use Google Maps and Google-owned Waze in such high volumes that both ride-hail services threw up their hands and integrated Google’s exceptional navigation tools into their own apps.

O’Beirne fails to mention Google’s own ride-hail ambitions. Waymo, the self-driving division of Google parent Alphabet, is developing its own ride-hail app in anticipation of launching a commercial self-driving mobility service next year. And Waze has been piloting a car-pooling service in California for the past year.

It’s clear that Google has its sights set on the lucrative ride-hailing market. And with a powerful tool like Google Maps in its arsenal, it could have a leg up on more established players.

Tuesday, December 26, 2017

Elon Musk Shows Off the Tesla Roadster that SpaceX will Send Beyond Mars

As reported by The Verge: Weeks after announcing that he plans to send an original Tesla Roadster to space atop a Falcon Heavy rocket, Elon Musk has released photos of the car being prepped for launch at SpaceX headquarters. The series of photos, posted to Instagram, show the Roadster attached to a fitting and placed between the two halves of the payload fairing that caps the rocket. The photos were posted just hours after a picture leaked on Reddit that showed a grainy view of the car being readied for its final ride.

This will be the inaugural flight of the Falcon Heavy, a rocket that SpaceX has been planning for years. The successor to the Falcon 9, it’s essentially (and simply put) three boosters strapped together, which combined will provide enough thrust to make it the most powerful rocket in the world. It will give SpaceX the ability to send bigger payloads to space while also helping the company push farther out into the Solar System.

But SpaceX doesn’t want to put a valuable payload on the very first flight, which even Musk has admitted could end (or begin) with an explosion. So the company plans to use a “dummy payload” instead. “Test flights of new rockets usually contain mass simulators in the form of concrete or steel blocks. That seemed extremely boring,” Musk wrote on Instagram today. “Of course, anything boring is terrible, especially companies, so we decided to send something unusual, something that made us feel.”

In April, Musk said he was trying to think of the “silliest thing we can imagine” to stick on top that first Falcon Heavy rocket. And on December 1st, we learned exactly what that meant. “Payload will be my midnight cherry Tesla Roadster playing Space Oddity,” Musk wrote on Twitter. “Destination is Mars orbit. Will be in deep space for a billion years or so if it doesn’t blow up on ascent.”


After some back and forth about whether he was joking, it became clear that Musk meant what he wrote. And there’s nothing really standing in his way — as long as the car doesn’t impact Mars, there aren’t really any laws blocking the effort.


Wednesday, December 20, 2017

Elon Musk Shows Off SpaceX’s Almost Fully-Assembled Falcon Heavy Rocket

As reported by The Verge: Elon Musk has tweeted out photos of SpaceX’s almost fully assembled Falcon Heavy rocket in Cape Canaveral, Florida — the biggest and best glimpse so far into what the final iteration will look like. The rocket’s launch is set for sometime in January, and the vehicle has never gotten this far in development before, so the photos show something quite promising. From the pictures, the biggest missing pieces look to be the payload and nose cone at the top.

The Falcon Heavy consists of three Falcon 9 cores strapped together and will be mostly reusable, with all three cores intended to return to Earth after launch so they can be used for other missions. Musk has said the rocket’s outer cores for this upcoming launch are previously flown Falcon 9 boosters.

As previously reported, the Falcon Heavy will be one of the most powerful rockets ever made, capable of lofting around 140,000 pounds of cargo into low Earth orbit. But given all the delays and challenges endured by Falcon Heavy, Musk has understandably set the bar low for success. “I hope it makes it far enough away from the pad that it does not cause pad damage,” said Musk in July. “I would consider even that a win, to be honest.”

Tuesday, December 19, 2017

To Save Lives, Self-Driving Cars Must Become the Ultimate Defensive Drivers

As reported by Futurism: In early November, a self-driving shuttle and a delivery truck collided in Las Vegas. The event, in which no one was injured and no property was seriously damaged, attracted media and public attention in part because one of the vehicles was driving itself – and because that shuttle had been operating for less than an hour before the crash.

It’s not the first collision involving a self-driving vehicle. Other crashes have involved Ubers in Arizona, a Tesla in “autopilot” mode in Florida and several others in California. But in nearly every case, it was human error, not the self-driving car, that caused the problem.

In Las Vegas, the self-driving shuttle noticed a truck up ahead was backing up, and stopped and waited for it to get out of the shuttle’s way. But the human truck driver didn’t see the shuttle, and kept backing up. As the truck got closer, the shuttle didn’t move – forward or back – so the truck grazed the shuttle’s front bumper.

As a researcher working on autonomous systems for the past decade, I find that this event raises a number of questions: Why didn’t the shuttle honk, or back up to avoid the approaching truck? Was stopping and not moving the safest procedure? If self-driving cars are to make the roads safer, the bigger question is: What should these vehicles do to reduce mishaps? In my lab, we are developing self-driving cars and shuttles. We’d like to solve the underlying safety challenge: Even when autonomous vehicles are doing everything they’re supposed to, the drivers of nearby cars and trucks are still flawed, error-prone humans.

How Crashes Happen
There are two main causes for crashes involving autonomous vehicles. The first source of problems is when the sensors don’t detect what’s happening around the vehicle. Each sensor has its quirks: GPS works only with a clear view of the sky; cameras work with enough light; lidar can’t work in fog; and radar is not particularly accurate. There may not be another sensor with different capabilities to take over. It’s not clear what the ideal set of sensors is for an autonomous vehicle – and, with both cost and computing power as limiting factors, the solution can’t be just adding more and more.
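
As a purely illustrative sketch of that point, here is the kind of availability check a fusion system might run before trusting each sensor. The sensor list, conditions, and thresholds below are invented for the example, not taken from any real vehicle stack.

```python
# Illustrative only: decide which sensors can be trusted under current
# conditions. Real autonomous stacks do far more, but the core idea is that
# every modality has failure modes and the software must account for them.
from dataclasses import dataclass

@dataclass
class Conditions:
    sky_visible: bool    # GPS needs line of sight to satellites
    lux: float           # cameras need enough ambient light
    fog_density: float   # lidar degrades badly in fog (0.0 = clear)

def usable_sensors(c: Conditions) -> dict:
    return {
        "gps":    c.sky_visible,
        "camera": c.lux > 50.0,          # rough "enough light" threshold
        "lidar":  c.fog_density < 0.3,   # rough "not too foggy" threshold
        "radar":  True,                  # works in most weather, but is coarse
    }

# Example: a foggy night in an urban canyon leaves only radar fully trusted.
print(usable_sensors(Conditions(sky_visible=False, lux=5.0, fog_density=0.8)))
```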

The second major problem happens when the vehicle encounters a situation that the people who wrote its software didn’t plan for – like having a truck driver not see the shuttle and back up into it. Just like human drivers, self-driving systems have to make hundreds of decisions every second, adjusting for new information coming in from the environment. When a self-driving car experiences something it’s not programmed to handle, it typically stops or pulls over to the roadside and waits for the situation to change. The shuttle in Las Vegas was presumably waiting for the truck to get out of the way before proceeding – but the truck kept getting closer. The shuttle may not have been programmed to honk or back up in situations like that – or may not have had room to back up.
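
That default behavior can be boiled down to a fallback policy: anything the programmers didn’t anticipate falls through to a safe but passive action. The sketch below is an invented simplification of that idea, not the shuttle’s actual software.

```python
# Simplified, hypothetical fallback policy. Situations with an explicit rule
# get an explicit action; everything else falls through to the default of
# stopping and waiting for conditions to change.
def choose_action(situation: str) -> str:
    rules = {
        "clear_road": "proceed",
        "red_light": "stop",
        "pedestrian_ahead": "stop",
    }
    # Anything the programmers didn't anticipate hits the default.
    return rules.get(situation, "stop_and_wait")

# No rule covers a truck reversing toward the vehicle, so it simply waits,
# which matches the behavior described in the Las Vegas collision.
print(choose_action("vehicle_backing_toward_us"))  # stop_and_wait
```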

The challenge for designers and programmers is combining the information from all the sensors to create an accurate representation – a computerized model – of the space around the vehicle. Then the software can interpret the representation to help the vehicle navigate and interact with whatever might be happening nearby. If the system’s perception isn’t good enough, the vehicle can’t make a good decision. The main cause of the fatal Tesla crash was that the car’s sensors couldn’t tell the difference between the bright sky and a large white truck crossing in front of the car.
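
One common way to build that “computerized model” of the surroundings is an occupancy grid that accumulates evidence from every sensor. The toy version below shows the idea with made-up detections and confidence values; it is a sketch of the general technique, not any production system.

```python
# Toy occupancy grid: fuse obstacle detections from several sensors into one
# shared model of the space around the vehicle. Cell values approximate the
# probability that a cell is occupied. All numbers here are illustrative.
import numpy as np

GRID = 20          # 20 x 20 cells around the car
CELL_M = 0.5       # each cell covers 0.5 m x 0.5 m

def fuse(detections):
    """detections: list of (x_m, y_m, confidence) tuples from any sensor."""
    grid = np.zeros((GRID, GRID))
    for x, y, conf in detections:
        # Convert vehicle-relative metres to grid indices (car at the centre).
        i = int(GRID / 2 + y / CELL_M)
        j = int(GRID / 2 + x / CELL_M)
        if 0 <= i < GRID and 0 <= j < GRID:
            # Evidence combines across sensors: occupancy belief grows as
            # independent detections agree (noisy-OR update).
            grid[i, j] = 1 - (1 - grid[i, j]) * (1 - conf)
    return grid

lidar_hits = [(2.0, 4.0, 0.9), (2.5, 4.0, 0.9)]   # hypothetical returns
radar_hits = [(2.0, 4.0, 0.6)]                    # coarser, lower confidence
world = fuse(lidar_hits + radar_hits)
print("Most likely obstacle cell:", np.unravel_index(world.argmax(), world.shape))
```
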
If autonomous vehicles are to fulfill humans’ expectations of reducing crashes, it won’t be enough for them to drive safely. They must also be the ultimate defensive drivers, ready to react when others nearby drive unsafely. An Uber crash in Tempe, Arizona, in March 2017 is an example of this.

According to media reports, in that incident, a person in a Honda CRV was driving on a major road near the center of Tempe. She wanted to turn left, across three lanes of oncoming traffic. She could see that two of the three lanes were clogged with traffic and not moving. She could not see the lane farthest from her, in which an Uber was driving autonomously at 38 mph in a 40 mph zone. The Honda driver made the left turn and hit the Uber car as it entered the intersection.

A human driver in the Uber car approaching an intersection might have expected cars to be turning across its lane. A person might have noticed she couldn’t see if that was happening and slowed down, perhaps avoiding the crash entirely. An autonomous car that’s safer than humans would have done the same – but the Uber wasn’t programmed to.

Improve Testing
That Tempe crash and the more recent Las Vegas one are both examples of a vehicle not understanding the situation well enough to determine the correct action. The vehicles were following the rules they’d been given, but they were not making sure their decisions were the safest ones. This is primarily because of the way most autonomous vehicles are tested.

The basic standard, of course, is whether self-driving cars can follow the rules of the road, obeying traffic lights and signs, knowing local laws about signaling lane changes, and otherwise behaving like a law-abiding driver. But that’s only the beginning.


Wednesday, December 6, 2017

Insurance Companies Are Now Offering Discounts if You Let Your Tesla Drive Itself

As reported by Futurism: While accidents have happened, one of the most appealing things about autonomous vehicles is their capacity to make our roads a safer place. Now, insurance companies are starting to offer financial incentives to promote adoption.

Britain’s largest automobile insurance company, Direct Line, has announced a 5 percent discount for customers who activate Autopilot functionality in their Tesla. It follows in the footsteps of Root, a startup that offers a similar promotion across nine states in the US.

It should be noted that Direct Line’s discount shouldn’t be taken as an endorsement of the technology, at least for the time being. The company is encouraging customers to use Autopilot so that it can collect data on whether or not the feature contributes to safer driving; insurance premiums can then be adjusted accordingly.

“At present the driver is firmly in charge so it’s just like insuring other cars, but it does offer Direct Line a great opportunity to learn and prepare for the future,” the company’s head of motor development, Dan Freedman, told Reuters.

Tesla Crash Test
In May 2016, the driver of a Tesla Model S using Autopilot mode was killed when his vehicle collided with an 18-wheeler truck at a highway intersection. However, a subsequent report by the National Highway Traffic Safety Administration (NHTSA) largely exonerated the automaker.

The NHTSA found that the crash rate of Tesla vehicles dropped by nearly 40 percent when Autosteer was activated. Elon Musk has since pledged that future improvements to the Autopilot system will contribute to a 90 percent reduction in accidents.

Data published by the Association for Safe International Road Travel asserts that over 37,000 people die as a result of car accidents every year in the US, with some 2.35 million suffering injuries. Furthermore, there are some 1.3 million deaths related to car accidents worldwide every year. The NHTSA has previously released data showing that almost 95 percent of crashes are caused by drivers.

These figures could be reduced significantly if autonomous driving systems were more widely used. Self-driving cars will be safest when there are no human drivers on the road, because their ability to communicate with one another won’t be subject to the same misunderstandings.

It’s easy to see reduced insurance premiums being used to convince drivers to cede control to their cars on a broader scale. At some point, we might even see the need for individual insurance disappear completely.

When companies are sufficiently confident in their self-driving vehicles, they might take on the responsibility, agreeing to pay any damages in case of an accident. This would likely push the automotive industry toward a model where cars are predominantly leased, rather than owned. Looking further forward, traveling by car might resemble the Tesla-centric autonomous taxi service that’s currently being implemented in Dubai.

Saturday, December 2, 2017

SpaceX will use the first Falcon Heavy to send a Tesla Roadster to Mars, Elon Musk says

As reported by The Verge: Always willing to up the stakes of an already difficult situation, SpaceX CEO Elon Musk has said the first flight of his company’s Falcon Heavy rocket will be used to send a Tesla Roadster into space. Musk first tweeted out the idea on Friday evening, but has since separately confirmed his plans with The Verge.

The first Falcon Heavy’s “payload will be my midnight cherry Tesla Roadster playing Space Oddity,” Musk wrote on Twitter, referencing the famous David Bowie song. “Destination is Mars orbit. Will be in deep space for a billion years or so if it doesn’t blow up on ascent.”

Musk has spoken openly about the non-zero chance that the Falcon Heavy will explode during its first flight, and because of that he once said he wanted to stick the “silliest thing we can imagine” on top of the rocket. Now we know what he meant. It’s unclear at the time of publication whether SpaceX has received any necessary approvals for this plan.

Falcon Heavy is the followup to SpaceX’s Falcon 9. It’s a more powerful rocket that the company hopes to use for missions to the Moon and Mars. It was originally supposed to take flight back in 2013 or 2014, but its maiden flight is now pegged for January 2018, according to Musk. (The company has been testing parts of the Falcon Heavy architecture over the last year, and has been busy readying the same launchpad that the Apollo 11 mission blasted off from for this flight.)

Falcon Heavy is, in overly simple terms, three of the company’s Falcon 9 rockets strapped together. It will therefore be capable of producing around three times the thrust of a single Falcon 9, allowing SpaceX to perform missions beyond low Earth orbit.
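
For a rough sense of scale, the back-of-the-envelope arithmetic is simple. Note that the roughly 7,600 kN sea-level thrust figure for a single Falcon 9 first stage is a commonly cited spec and an assumption on my part, not a number from the article.

```python
# Back-of-the-envelope thrust estimate. The Falcon 9 figure is the commonly
# cited sea-level value (~7,600 kN from nine Merlin engines); it is an
# outside assumption, not a number from the article.
FALCON_9_THRUST_KN = 7_600
CORES = 3

falcon_heavy_est_kn = FALCON_9_THRUST_KN * CORES
print(f"~{falcon_heavy_est_kn:,} kN, roughly "
      f"{falcon_heavy_est_kn / FALCON_9_THRUST_KN:.0f}x a single Falcon 9")
```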

SpaceX also ultimately plans to be able to recover all three rocket cores that power the Falcon Heavy, just like it’s done over the last year with the main booster stage of its Falcon 9s. It’s unclear if the company will attempt to recover the boosters from this maiden flight.

Of course, Musk also said earlier this fall at the International Astronautical Congress that he plans to pour all of SpaceX’s resources into an even bigger rocket architecture, known as the Interplanetary Transport System (or Big F**king Rocket, for short).

That new mega-rocket, when built, would essentially render the Falcon Heavy and the Falcon 9 obsolete. It will be capable of taking on the same duties that those rockets perform, while adding new capabilities that range from planting a colony on Mars to making 30-minute transcontinental travel possible on Earth.

In that light, maybe shooting a Tesla into orbit around the Red Planet doesn’t seem so outlandish.


Thursday, November 30, 2017

US Supreme Court Considers if Your Privacy Rights Include Location Data

As reported by Engadget: With all the attention focused on the FCC's upcoming vote to dismantle net neutrality protections, it's easy to have missed an upcoming hearing that has the potential to reshape electronic-privacy protection. Today, the Supreme Court is hearing arguments in Carpenter v. United States — and at issue is cellphone-tower location data that law enforcement obtained without a warrant.

Defendant Timothy Carpenter, who was convicted as the mastermind behind two years of armed robberies in Michigan and Ohio, has argued that his location data, as gathered by his cellphone service provider, is covered under the Fourth Amendment, which protects citizens against "unreasonable searches and seizures." Thus far, appeals courts have upheld the initial decision that law enforcement didn't need a warrant to acquire this data, so the Supreme Court is now tasked with determining whether this data is deserving of more-rigorous privacy protection.

This case has been going on for years, so let's get some background details out of the way. Amy Howe, formerly a reporter and editor for SCOTUSblog, describes how law enforcement asked cellphone service providers for details on 16 phone numbers tied to the crimes, including Carpenter's number and that of a co-defendant. That data included months of cell-site-location information (CSLI) that shows precise GPS coordinates of cellphone towers plus the date and time that a phone tried to connect to the tower in question. The FBI used this to create a map of where the phone and its owner were at any given time. The FBI received multiple months of data, not just data for the days of the robberies, and was never asked to produce a warrant.

The FBI's explanation, which the courts have thus far backed up, relies on a legal principle known as the third-party doctrine. Jennifer Lynch from the Electronic Frontier Foundation explains that the third-party doctrine states that information you voluntarily share with "someone else" isn't protected by the Fourth Amendment, because third parties aren't legally bound to keep the info you shared with them private. And the definition of "someone else" is quite broad -- in this case, the courts view the data that cellphone providers collect as something customers are voluntarily sharing, simply by using their services.

Carpenter has argued that the third-party doctrine was not intended to be applied to things like cellphones. That's largely because the legal backing of the third-party doctrine is based on two Supreme Court cases from the 1970s, years before the first cellphone even went on sale to the public. Simply put, the way courts are ruling on third-party doctrine doesn't make sense in an age when so much sensitive information is bound up in our cellphones.

There's also a 2012 Supreme Court case that could back up Carpenter's argument. In United States v. Jones, the Supreme Court unanimously ruled that it was a Fourth Amendment violation to attach a GPS unit to a car without a search warrant. The FBI had planted the GPS onto a car parked on private property and used it to track its position every 10 seconds for a full month. That's more granular than the location info you get from a cellphone, but the cases do have some similarities.

After the court's decision, Justice Sonia Sotomayor wrote that the third-party doctrine was "ill suited to the digital age" and expressed her opinion that privacy case law was failing to keep up with the rapid changes that smartphones and other technology are making to how we as a society view privacy. "People disclose the phone numbers that they dial or text to their cellular providers, the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers, and the books, groceries and medications they purchase to online retailers," she wrote. "I would not assume that all information voluntarily disclosed to some member of the public for a limited purpose is, for that reason alone, disentitled to Fourth Amendment protection."

Some of the world's biggest tech companies, including Apple, Facebook, Microsoft, Google, Twitter and even Verizon agree with Sotomayor. In August, a total of 15 companies filed an amici curiae brief related to the Carpenter case in which they argue that "fourth amendment doctrine must adapt to the changing realities of the digital era" and that "rigid analog-era rules should yield to consideration of reasonable expectations of privacy in the digital age." Of course, this argument may not win over the Supreme Court, but its ruling in the 2012 GPS case shows that the justices could be in favor of stronger privacy protection.

Unfortunately for those who believe in expanded privacy rights, lower courts have so far sided with the third-party doctrine when it comes to CSLI. Lynch writes that "five federal appellate courts, in deeply divided opinions, have held that historical CSLI isn't protected by the Fourth Amendment -- in large part because the information is collected and stored by third-party service providers." We'll find out soon whether the Supreme Court is ready to break with those past rulings, a move that could lead to both freedom for Timothy Carpenter and a new precedent for privacy in the age of the smartphone.