
Wednesday, June 15, 2016

The AI Dashcam App That Wants to Rate Every Driver in the World

As reported by IEEE Spectrum: If you’ve been out on the streets of Silicon Valley or New York City in the past nine months, there’s a good chance that your bad driving habits have already been profiled by Nexar. This U.S.-Israeli startup is aiming to build what it calls “an air traffic control system” for driving, and has just raised an extra $10.5 million in venture capital financing.
Since Nexar launched its dashcam app last year, smartphones running it have captured, analyzed, and recorded over 5 million miles of driving in San Francisco, New York, and Tel Aviv. The company’s algorithms have now automatically profiled the driving behavior of over 7 million cars, including more than 45 percent of all registered vehicles in the Bay Area, and over 30 percent of those in Manhattan.
Using the smartphone’s camera, machine vision, and AI algorithms, Nexar recognizes the license plates of the vehicles around it, and tracks their location, velocity, and trajectory. If a car speeds past or performs an illegal maneuver like running a red light, that information is added to a profile in Nexar’s online database. When another Nexar user’s phone later detects the same vehicle, it can flash up a warning to give it a wide berth. (This feature will go live later this year.)
Lior Strahilevitz, a law professor at the University of Chicago, proposed a similar (if lower-tech) reputation system for drivers a decade ago. “I think it’s a creative and sensible way to help improve the driving experience,” he says. “There aren’t a lot of legal impediments in the United States to what Nexar is doing, nor should there be.” Eran Shir, Nexar’s co-founder, says, “If you’re driving next to me and you’re a dangerous driver, I want to know about it so I can be prepared.”
Nexar estimates that if 1 percent of drivers use the app daily, it would take just one month to profile 99 percent of a city’s vehicles. “We think that it’s a service to the community to know if you’re a crazy driver or not,” says Shir.
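Nexar hasn’t published the model behind that estimate, but a back-of-envelope simulation shows how quickly random sightings compound. All of the numbers below (city fleet size, plates passed per user per day) are illustrative assumptions, not Nexar’s figures:

```python
# Toy coverage model -- illustrative assumptions, not Nexar's methodology.
N = 1_000_000              # vehicles registered in the city (assumed)
active = int(0.01 * N)     # 1 percent of drivers running the app daily
sightings = 20             # distinct plates one app user passes per day (assumed)

# Chance a specific car is captured by at least one user on a given day:
p_daily = 1 - (1 - sightings / N) ** active

# Expected coverage after 30 days of independent daily draws:
coverage_30 = 1 - (1 - p_daily) ** 30
print(f"daily capture probability: {p_daily:.2f}")
print(f"fleet profiled after 30 days: {coverage_30:.1%}")
```

Even with each user passing only 20 distinct plates a day, the geometric compounding pushes expected coverage past 99 percent within a month, which is the shape of the claim Shir is making.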
That community includes insurance companies, which Nexar suggests could save billions by cherry-picking only the best drivers to cover. Nexar has calculated that companies using its universal driving score could save $125 a year on each policy. Drivers benefit, too, from video and sensor footage stored in the cloud that they can use to support their side of the story following a collision.

Shir hopes that Nexar will also reduce traffic fatalities long before self-driving cars become mainstream. The app can highlight treacherous intersections, or detect a car braking sharply and send alerts to users several cars back or even around a corner. “This needs to be a real-time network,” says Shir. “We’ve optimized the way that cars communicate so that the latency is very low: about 100 to 150 milliseconds.”
Such targeted warnings require much more precise geolocation than that offered by normal GPS systems, which are typically accurate to within only 5 to 50 meters. Nexar’s app fuses data from multiple sensors in the smartphone. The accelerometer senses potholes and speed bumps, while the magnetometer (used for compass settings) detects when the car is travelling under power lines. “We use these, refreshed fifty times a second, to crowdsource features of the road and pinpoint where you are to within 2 meters,” says Shir. A side benefit is that the company has built detailed maps of road surface quality in its pilot cities.
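The article doesn’t disclose Nexar’s actual fusion algorithm, but the flavor of the idea can be shown with a toy 1-D complementary filter: dead-reckon at 50 Hz from a noisy speed estimate, and nudge the running estimate toward each low-rate, noisy GPS fix. Every parameter here is an assumption for illustration:

```python
import random

random.seed(42)

DT = 1 / 50          # 50 Hz sensor refresh, as quoted in the article
GPS_SIGMA = 20.0     # plain-GPS position error in metres (assumed mid-range)
SPEED = 15.0         # true speed in m/s (illustrative)

true_pos = est = 0.0
gps_errs, fused_errs = [], []

for step in range(50 * 600):                # ten simulated minutes
    true_pos += SPEED * DT
    # High-rate dead reckoning from a slightly noisy speed estimate:
    est += (SPEED + random.gauss(0, 0.5)) * DT
    if step % 50 == 0:                      # one GPS fix per second
        gps = true_pos + random.gauss(0, GPS_SIGMA)
        est = 0.98 * est + 0.02 * gps       # complementary blend
        gps_errs.append(abs(gps - true_pos))
    fused_errs.append(abs(est - true_pos))

print(f"mean raw GPS error:      {sum(gps_errs) / len(gps_errs):.1f} m")
print(f"mean fused estimate err: {sum(fused_errs) / len(fused_errs):.1f} m")
```

A production system would add landmark matching (the potholes and power lines described above) as extra correction signals, but the principle is the same: high-rate inertial data smooths between, and is anchored by, sparse absolute fixes.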
Shir thinks that Nexar can also help drivers realize the vision of smart, connected highways. “We’re going into a hybrid world where autonomous vehicles and humans will share the road,” says Shir. “We won’t be able to shout at each other or ask someone to move. We need a network that will manage our roads as a scarce resource.”
For the past decade, the automotive industry has been struggling to implement dedicated short range communications (DSRC), a messaging system that lets a car transmit its location, speed, and direction to nearby vehicles and infrastructure. Shir thinks that apps like Nexar could leapfrog the billions of dollars and decades of roll-out time that such a system would likely demand.
“DSRC is dead in the water,” he says. “Instead of sharing information about a single vehicle, where you need a density [of equipped vehicles] of 10 to 20 percent to become effective, you can share the information of all the vehicles around you, and start with 1 percent. It’s a massive force multiplier.”
Over the next year, Nexar plans to launch its network features in 10 more cities, including San Diego; Washington, D.C.; Chicago; and Seattle. It will work toward that magic 1-percent penetration mark where it could rate almost every driver and detect almost every incident.

Although ranking the driving performance of every vehicle in the United States might sound legally dubious, Lior Strahilevitz says that it is probably legal: “Courts generally say that people generally have little or no expectation of privacy in the movements of their cars on public roads, as long as cars aren’t being tracked everywhere they go for a lengthy period of time.”
Nevertheless, Nexar will face some ethical dilemmas. For example, should the app inform users when it spots a license plate that’s the subject of an Amber Alert? Or contact law enforcement directly if the algorithms suggest that an erratically moving car is being operated by an intoxicated driver?
Although Shir says that Nexar is “not interested in generating more traffic ticket revenue for cities… or becoming the long arm of the FBI,” he admits that law enforcement could subpoena its raw footage and sensor data.
Ultimately, Nexar might succeed because drivers are constantly being rated, whether or not they are running the app themselves. If its algorithms are judging you anyway, you might not want to be the only one in the dark about that accident-prone pick-up in the next lane.

Apple Is Bringing the AI Revolution to Your iPhone

As reported by Wired: Your next iPhone will be even better at guessing what you want to type before you type it. Or so say the technologists at Apple.

Let’s say you use the word “play” in a text message. In the latest version of the iOS mobile operating system, “we can tell the difference between the Orioles who are playing in the playoffs and the children who are playing in the park, automatically,” Apple senior vice president Craig Federighi said Monday morning during his keynote at the company’s annual Worldwide Developer Conference.

Like a lot of big tech companies, Apple is deploying deep neural networks, networks of hardware and software that can learn by analyzing vast amounts of data. Specifically, Apple uses “long short-term memory” neural networks, or LSTMs. They can “remember” the beginning of a conversation as they’re reading the end of it, making them better at grasping context.

Google uses a similar method to drive Smart Reply, which suggests responses to email messages. But Apple’s “QuickType”—that’s what the company calls its version—shows that not only is Apple pushing AI onto personal devices, it’s pushing harder than Federighi let on.

Today, on its website, Apple also introduced an application programming interface, or API, that lets outside businesses and coders use a similar breed of neural network. This tool, Basic Neural Network Subroutines, is a “collection of functions that you can use to construct neural networks” on a wide range of Apple operating systems, including iOS as well as OS X (for desktops and laptops), tvOS (for TVs), and watchOS (for watches), according to the documentation. “They’re making it as easy as possible for people to add neural nets to their apps,” says Chris Nicholson, CEO and founder of deep learning startup Skymind.

For now, BNNS looks better at identifying images than understanding natural language. But either way, neural networks don’t typically run on laptops and phones. They run atop computer servers on the other side of the Internet, and then they deliver their results to devices across the wire. (Google just revealed that it has built a specialized chip that executes neural nets inside its data centers before sending the results to your phone). Apple wants coders to build neural nets that work even without a connection back to the ‘net—and that’s unusual. Both Google and IBM have experimented with the idea, but Apple is doing it now.
It might not work. Apple doesn’t provide a way of training the neural net, the phase where it actually learns a task by analyzing data. The new Apple API is just a way of executing the neural net once it’s trained. Coders, Nicholson says, will have to handle training on their own or use pre-trained models from some other source. Plus, no one yet knows how well Apple’s neural nets will run on a tiny device like a phone or a watch. They may need more processing power and battery life than such devices can provide. But those are details; one day, neural nets will work on personal devices, and Apple is moving toward that day.

Bummer: SpaceX’s Landing Streak Comes to an End

As reported by The Verge: A SpaceX Falcon 9 rocket successfully launched two satellites into orbit this morning, but the company failed to land the vehicle on a floating drone ship at sea afterward.
The vehicle's landing caused a bit of drama, since SpaceX wasn't sure at first if the vehicle actually made it down in one piece. Once the rocket landed, it shook the drone ship pretty violently, causing the ship's onboard camera to freeze. The last shots of the vehicle before the camera cut out showed the Falcon 9 standing upright on the ship, but there were also some flames around the bottom.
Afterward, a SpaceX employee announced on the company's webcast that the vehicle was indeed lost. "We can say that Falcon 9 was lost in this attempt," said Kate Tice, a process improvement engineer for SpaceX. Later, CEO Elon Musk confirmed that the Falcon 9 suffered a RUD, or rapid unscheduled disassembly. That's Musk-speak for an explosion.

Ascent phase & satellites look good, but booster rocket had a RUD on droneship

Later, Musk said that the problem had to do with low thrust in one of the three engines the rocket uses for its landing burn, and that all three need to be operating at full capacity to handle this type of landing. He noted that the company is already working on upgrades to the Falcon 9 so that it can handle this type of "thrust shortfall" in the future.
The failure puts an end to SpaceX’s recent landing streak. The company has pulled off successful landings after its past three launches, all of which touched down on the drone ship. So far the company has landed four Falcon 9s in total — three at sea and one on solid ground.
SpaceX will have many more chances to land its rockets again soon. The company will launch a cargo resupply mission to the International Space Station for NASA on July 16th. After that launch, SpaceX will try to land the Falcon 9 on solid ground at Cape Canaveral, Florida — something it hasn’t attempted since its first rocket landing in December. And after that, SpaceX has another satellite launch slated for August.
Meanwhile, the company still has an impressive stockpile of landed rockets in its possession. Right now, SpaceX is keeping its four recovered rockets in a hangar at Launch Complex 39A, a launch site at Kennedy Space Center in Florida that the company leases from NASA. That hangar can only store five Falcon 9 rockets at a time, though. So whenever SpaceX does land its next rocket in Florida, the building will be at full capacity.

Tuesday, June 14, 2016

Elon Musk: People Will Probably Die on the First SpaceX Missions to Mars

As reported by IBTimes: Technology entrepreneur Elon Musk is really excited about getting the first humans to land on Mars in 2025 with a view to establishing a colony, but in case you didn't realize this already, he is warning that pioneering a new planet probably won't be much fun.

"It's dangerous and probably people will die – and they'll know that. And then they'll pave the way, and ultimately it will be very safe to go to Mars, and it will be very comfortable. But that will be many years in the future," Musk told the Washington Post in a new interview detailing how the Mission to Mars technical journey is likely to evolve.

Musk's space transportation company SpaceX currently has a $1.6bn contract with NASA to routinely ferry cargo to and from the International Space Station (ISS). In November 2015, SpaceX received official approval from NASA to send the US space agency's astronauts to the ISS starting in 2017; currently, the only way for astronauts to reach the station is via Russia.

SpaceX plans to start flying unmanned spacecraft to Mars in 2018, with flights timed to occur every two years, when Earth and the Red Planet are closest in their orbits. The purpose of these missions will be to gather valuable data about descending and landing on Mars for future human missions.

There is currently a great deal of interest in the Mission to Mars and organisations like Dutch-based Mars One have galvanized the general public to apply to be the first humans on Mars. The likelihood of this being possible, however, without backing from NASA and the European Space Agency (ESA) is really slim, and some think that Mars One could just be a big scam.

"Essentially what we're saying is we're establishing a cargo route to Mars. It's a regular cargo route. You can count on it. It's going to happen every 26 months. Like a train leaving the station," he said.

"And if scientists around the world know that they can count on that, and it's going to be inexpensive, relatively speaking compared to anything in the past, then they will plan accordingly and come up with a lot of great experiments."

If these autonomous spacecraft flights are successful and are proven to be safe enough for humans, then the first human mission will take place in 2025. However, even when the two planets are at their closest, they are still separated by a distance of 140 million miles and it will take months for the spacecraft to reach Mars.

For the first pioneering humans who decide to leave their lives on Earth behind, Musk admits the journey will likely be "hard, risky, dangerous, difficult," but he points out it is no different from the choice made by the British who crossed the sea to colonize the Americas in the 1600s.

"Just as with the establishment of the English colonies, there are people who love that. They want to be the pioneers," he said.

Friday, June 10, 2016

V2X - Qualcomm’s Connected Car Reference Platform Aims to Connect Smart Cars to Everything

As reported by NetworkWorld: With 200 to 300 microcontrollers and microprocessors in the typical automobile, cars are already pretty smart. And Google’s and Tesla’s continued development, as well as auto manufacturers’ R&D investments in preparation of autonomous cars, indicate cars are about to get much smarter.

That increased intelligence means vehicles will have more silicon devices that are more integrated, with more densely packed circuitry. Functional modules, such as control systems, infotainment, and autonomous steering and braking, multiply the number of chips that semiconductor manufacturers can sell into each car.

To fill the gap between the connectivity capabilities of today’s cars and the complex connectivity of next-generation cars, Qualcomm today announced its Connected Car Reference Platform, which the car industry can use to build prototypes of the next-generation connected car. Every category from economy to luxury will be much smarter than the connected luxury car of today, creating a big opportunity for Qualcomm to supply semiconductors to automakers and suppliers.

Connected cars require faster, more-complex connectivity

Connectivity becomes more complex as infotainment experiences become richer and cars become semi-autonomous, like the Tesla S, or fully autonomous, like Google’s vehicle. Frank Fitzek, chief of Germany’s 5G Lab, explained to me in February how autonomous cars will need ultra-low-latency, fast 5G network connectivity.

Connected car network speeds will have to get faster because consumer expectations for connectivity in the autonomous era will be the same in a car as at home. Passengers will connect mobile devices with one another and infotainment systems to collaboratively work, play games, cast streamed music and video to car stereos and displays, as well as communicate with the world beyond the car interior.

If this sounds futuristic, go rent or borrow a 2016 model luxury car from Audi, Honda, or Mercedes or a Tesla S and you will experience excellent connectivity and smartphone integration. Connectivity and options in the next generation will be substantially better.

Autonomous steering and collision avoidance features were not announced. Onboard specialized processors, in addition to the capabilities announced today, will be necessary for autonomous driving. It’s not difficult to imagine that Qualcomm will apply its machine learning SDK, announced just a few weeks ago, and the Snapdragon 820 processor to meet those needs.

Collision avoidance, though, requires a lot of communication with onboard car sensors and cameras—and with a local cloud of Wi-Fi and V2X. V2X, sometimes referred to as vehicle-to-everything, incorporates V2I (Vehicle to Infrastructure), V2V (Vehicle to Vehicle), V2P (Vehicle to Pedestrian), V2D (Vehicle to Device) and V2G (Vehicle to Grid). Much of the collision avoidance system will operate using a local cloud, but safely coordinating cars in heavy traffic travelling at 70 mph, or on the Autobahn at 120 mph, will require ultra-low-latency, fast 5G.

Features of the Connected Car Reference Platform

Qualcomm described the following features of the Connected Car Reference Platform in its release:
  • Scalability: Using a common framework that scales from a basic telematics control unit (TCU) up to a highly integrated wireless gateway, connecting multiple electronic control units (ECUs) within the car and supporting critical functions, such as over-the-air software upgrades and data collection and analytics.
  • Future-proofing: Allowing the vehicle’s connectivity hardware and software to be upgraded through its life cycle, providing automakers with a migration path from Dedicated Short Range Communications (DSRC) to hybrid/cellular V2X and from 4G LTE to 5G.
  • Wireless coexistence: Managing concurrent operation of multiple wireless technologies using the same spectrum frequencies, such as Wi-Fi, Bluetooth and Bluetooth Low Energy.
  • OEM and third-party applications support: Providing a secure framework for the development and execution of custom applications.
There are a few interesting points about those features. Qualcomm is attempting to solve a difficult problem for automakers: over-the-air software updates. Updating software on a mission-critical system such as an autonomous car is a much harder problem than updating a smartphone because it has to be completely secure and work every time without reducing safety. But Qualcomm has to solve this problem anyway to accelerate shipments not only to the car market but to the IoT market, where it hopes to sell tens of billions of chips.

Keeping up with connectivity improvements

One of the inconsistencies between building cars and building smartphones is that the average car has a 12-year useful life, while a smartphone lasts just a couple of years. Smartphone connectivity improves with each design iteration, so a phone’s network speeds will almost always be faster than what is installed in the car. Unless the car’s network is future-proofed, consumers will rely on their phone’s connection rather than the car’s. Qualcomm said there will be a migration from older networks to newer ones, perhaps offering an upgrade to the car’s network connectivity every two years to match the improvements in smartphones.

Qualcomm is working toward a unified communications system to address infotainment, navigation, autonomous steering and braking, and control systems connected to the controller area network (CAN). Autonomous steering and braking, navigation, and control systems must be connected, but automakers have resisted combining the CAN bus with infotainment systems because it increases the attack surface that could be exploited by a criminal hacker. Qualcomm claims its design is secure, but it can expect to be asked by safety engineers to prove it.

Qualcomm says it expects to ship the Connected Car Reference Platform to automakers, tier 1 auto suppliers and developers late this year.

Tesla Knows When a Crash is Your Fault

As reported by the Washington Post: Every day, our cars are becoming smarter and more connected. This may someday save your life in a crash, or prevent one altogether — but it also makes it far harder to evade blame when you're the cause of a fender-bender.

One Tesla owner appears to be finding that out firsthand as he struggles to convince the luxury automaker his wife wasn't the one who crashed his Model X. Instead, he complains, the car suddenly accelerated all by itself, jumped the curb and rammed straight into the side of a shopping center.

Tesla is disputing the owner's account of the incident, citing detailed diagnostic logs that show the car's gas pedal suddenly being pressed to the floor in the moments before the collision.
"Consistent with the driver's actions, the vehicle applied torque and accelerated as instructed," Tesla said in a press statement.
At no time did the driver have Tesla's autopilot or cruise control engaged, according to Tesla, which means the car was under manual control — it couldn't have been anyone else but the human who caused the crash. The car uses multiple sensors to double check a driver's accelerator commands.
The Model X owner appears to be standing by his story, but here's the broader takeaway. Cars have reached a level of sophistication in which they can tattle on their own owners, simply by handing over the secrets embedded in the data they already collect about your driving.
Your driving data is extremely powerful: It can tell your mechanic exactly what parts need work. It offers hints about your commute and your lifestyle. And it can help keep you safe, when combined with features such as automatic lane-keeping and crash avoidance systems.
But there is a potential dark side: the data can be abused. A rogue insurance company might look at it and try to raise your premiums. Automakers might have an incentive to claim that you, the owner, were at fault for a crash even if you think you weren't. To be clear, that isn't necessarily what's going on with Tesla's Model X owner. But the case offers a window into the kinds of issues drivers will increasingly face as their vehicles become smarter.

Thursday, June 9, 2016

This Deep Space Atomic Clock Is Key for Future Exploration

As reported by Time: We all intuitively understand the basics of time. Every day we count its passage and use it to schedule our lives.
We also use time to navigate our way to the destinations that matter to us. In school we learned that speed and time will tell us how far we went in traveling from point A to point B; with a map we can pick the most efficient route – simple.
But what if point A is the Earth, and point B is Mars – is it still that simple? Conceptually, yes. But to actually do it we need better tools – much better tools.
At NASA’s Jet Propulsion Laboratory, I’m working to develop one of these tools: the Deep Space Atomic Clock, or DSAC for short. DSAC is a small atomic clock that could be used as part of a spacecraft navigation system. It will improve accuracy and enable new modes of navigation, such as unattended or autonomous.
In its final form, the Deep Space Atomic Clock will be suitable for operations in the solar system well beyond Earth orbit. Our goal is to develop an advanced prototype of DSAC and operate it in space for one year, demonstrating its use for future deep space exploration.
Speed and time tell us distance

To navigate in deep space, we measure the transit time of a radio signal traveling back and forth between a spacecraft and one of our transmitting antennae on Earth (usually one of NASA’s Deep Space Network complexes located in Goldstone, California; Madrid, Spain; or Canberra, Australia).
We know the signal is traveling at the speed of light, a constant at approximately 300,000 km/sec (186,000 miles/sec). Then, from how long our “two-way” measurement takes to go there and back, we can compute distances and relative speeds for the spacecraft.
For instance, an orbiting satellite at Mars is an average of 250 million kilometers from Earth. The time the radio signal takes to travel there and back (called its two-way light time) is about 28 minutes. We can measure the travel time of the signal and then relate it to the total distance traversed between the Earth tracking antenna and the orbiter to better than a meter, and the orbiter’s relative speed with respect to the antenna to within 0.1 mm/sec.
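The numbers in that example follow directly from the speed of light; a quick check, using the article's average Earth-Mars distance:

```python
C_KM_S = 299_792.458     # speed of light in km/s

def two_way_light_time(distance_km):
    """Round-trip travel time, in seconds, over the given one-way distance."""
    return 2 * distance_km / C_KM_S

t = two_way_light_time(250e6)   # average Mars-orbiter distance from the article
print(f"two-way light time: {t:.0f} s (~{t / 60:.0f} minutes)")

# The same relation run backwards shows why clock quality matters:
# each nanosecond of round-trip timing error moves the inferred
# one-way distance by c * dt / 2, about 15 cm.
range_err_per_ns = C_KM_S * 1000 * 1e-9 / 2
print(f"range error per ns of timing error: {range_err_per_ns:.2f} m")
```

Reaching the sub-meter range accuracy quoted above therefore requires timing the round trip to within a few nanoseconds over a signal that has been in flight for nearly half an hour.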
We collect the distance and relative speed data over time, and when we have a sufficient amount (for a Mars orbiter this is typically two days) we can determine the satellite’s trajectory.
Measuring time, way beyond Swiss precision

Fundamental to these precise measurements are atomic clocks. By measuring very stable and precise frequencies of light emitted by certain atoms (examples include hydrogen, cesium, rubidium and, for DSAC, mercury), an atomic clock can regulate the time kept by a more traditional mechanical (quartz crystal) clock. It’s like a tuning fork for timekeeping. The result is a clock system that can be ultra stable over decades.
The precision of the Deep Space Atomic Clock relies on an inherent property of mercury ions – they transition between neighboring energy levels at a frequency of exactly 40.5073479968 GHz. DSAC uses this property to measure the error in a quartz clock’s “tick rate,” and, with this measurement, “steers” it towards a stable rate. DSAC’s resulting stability is on par with ground-based atomic clocks, gaining or losing less than a microsecond per decade.
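The stability figure is easier to appreciate as a fractional frequency error. Losing at most a microsecond over a decade means the clock's tick rate is held to a few parts in 10^15, which at the 40.5 GHz mercury transition corresponds to tracking the frequency to within roughly a ten-thousandth of a hertz:

```python
SECONDS_PER_DECADE = 10 * 365.25 * 86_400      # about 3.16e8 seconds

drift = 1e-6                                   # at most 1 microsecond per decade
fractional_stability = drift / SECONDS_PER_DECADE
print(f"fractional stability: {fractional_stability:.1e}")

# Equivalent tolerance on the 40.5073479968 GHz mercury-ion transition:
freq_tolerance_hz = fractional_stability * 40.5073479968e9
print(f"frequency tolerance: {freq_tolerance_hz:.1e} Hz")
```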
Continuing with the Mars orbiter example, the error contributed by the Deep Space Network’s ground-based atomic clocks to the orbiter’s two-way light time measurement is on the order of picoseconds, amounting to only fractions of a meter of the overall distance error. Likewise, the clocks’ contribution to error in the orbiter’s speed measurement is a minuscule fraction of the overall error (1 micrometer/sec out of the 0.1 mm/sec total).
The distance and speed measurements are collected by the ground stations and sent to teams of navigators who process the data using sophisticated computer models of spacecraft motion. They compute a best-fit trajectory that, for a Mars orbiter, is typically accurate to within 10 meters (about the length of a school bus).
The ground clocks used for these measurements are the size of a refrigerator and operate in carefully controlled environments – definitely not suitable for spaceflight. In comparison, DSAC, even in its current prototype form as seen above, is about the size of a four-slice toaster. By design, it’s able to operate well in the dynamic environment aboard a deep-space exploring craft.
One key to reducing DSAC’s overall size was miniaturizing the mercury ion trap. Shown in the prior figure, it’s about 15 cm (6 inches) in length. The trap confines the plasma of mercury ions using electric fields. Then, by applying magnetic fields and external shielding, we provide a stable environment where the ions are minimally affected by temperature or magnetic variations. This stable environment enables measuring the ions’ transition between energy states very accurately.
The DSAC technology doesn’t really consume anything other than power. All these features together mean we can develop a clock that’s suitable for very long duration space missions.
Because DSAC is as stable as its ground counterparts, spacecraft carrying DSAC would not need to turn signals around to get two-way tracking. Instead, the spacecraft could send the tracking signal to the Earth station or it could receive the signal sent by the Earth station and make the tracking measurement on board. In other words, traditional two-way tracking can be replaced with one-way, measured either on the ground or on board the spacecraft.
So what does this mean for deep space navigation? Broadly speaking, one-way tracking is more flexible, scalable (since it could support more missions without building new antennas) and enables new ways to navigate.
DSAC advances us beyond what’s possible today

The Deep Space Atomic Clock has the potential to solve a bunch of our current space navigation challenges.
  • Places like Mars are “crowded” with many spacecraft: Right now, there are five orbiters competing for radio tracking. Two-way tracking requires spacecraft to “time-share” the resource. But with one-way tracking, the Deep Space Network could support many spacecraft simultaneously without expanding the network. All that’s needed are capable spacecraft radios coupled with DSAC.
  • With the existing Deep Space Network, one-way tracking can be conducted at a higher-frequency band than current two-way. Doing so improves the precision of the tracking data by upwards of 10 times, producing range rate measurements with only 0.01 mm/sec error.
  • One-way uplink transmissions from the Deep Space Network are very high-powered. They can be received by smaller spacecraft antennas with greater fields of view than the typical high-gain, focused antennas used today for two-way tracking. This change allows the mission to conduct science and exploration activities without interruption while still collecting high-precision data for navigation and science. As an example, use of one-way data with DSAC to determine the gravity field of Europa, an icy moon of Jupiter, can be achieved in a third of the time it would take using traditional two-way methods with the flyby mission currently under development by NASA.
  • Collecting high-precision one-way data on board a spacecraft means the data are available for real-time navigation. Unlike two-way tracking, there is no delay with ground-based data collection and processing. This type of navigation could be crucial for robotic exploration; it would improve accuracy and reliability during critical events – for example, when a spacecraft inserts into orbit around a planet. It’s also important for human exploration, when astronauts will need accurate real-time trajectory information to safely navigate to distant solar system destinations.
Countdown to DSAC launch

The DSAC mission is a hosted payload on the Surrey Satellite Technology Orbital Test Bed (OTB) spacecraft. The DSAC Demonstration Unit, an ultra-stable quartz oscillator, and a GPS receiver with antenna will enter low-altitude Earth orbit after launching on a SpaceX Falcon Heavy rocket in early 2017.
While it’s on orbit, DSAC’s space-based performance will be measured in a yearlong demonstration, during which Global Positioning System tracking data will be used to determine precise estimates of OTB’s orbit and DSAC’s stability. We’ll also be running a carefully designed experiment to confirm DSAC-based orbit estimates are as accurate or better than those determined from traditional two-way data. This is how we’ll validate DSAC’s utility for deep space one-way radio navigation.
In the late 1700s, navigating the high seas was forever changed by John Harrison’s development of the H4 “sea watch.” H4’s stability enabled seafarers to accurately and reliably determine longitude, which until then had eluded mariners for thousands of years. Today, exploring deep space requires traveling distances that are orders of magnitude greater than the lengths of oceans, and demands tools with ever more precision for safe navigation. DSAC is at the ready to respond to this challenge.

The Conversation