“I’m an idiot, someone stop me!”

An amateur Phantom II operator (or, perhaps we should say, former operator) did something stupid that could have hurt someone, and now laments the fact that what he was doing isn’t illegal:

The thing is, there are basically zero regulations in the U.S. preventing what I did from happening again. There is no age requirement or learner’s permit necessary to purchase a drone. There are some basic rules in place from the FAA that ban hobbyists from flying over densely populated areas or close to airports, but aside from that, if you stay under 400 feet, you’re good to go.

When it comes to commercial drone flights, on the other hand, the FAA has made them completely illegal in the US. It’s taken years to develop new rules for companies, during which time other countries have forged ahead. And now it’s saying it will miss the deadline set by Congress to get commercial drones flying over American skies in 2015.

This is completely backward. It didn’t really hit me until my own crash, but the FAA is actually focusing its regulation on the wrong group. Companies typically need to carry liability insurance on the machinery they operate. A bad crash would be terrible for a brand, something that will make them more conservative about flights. The people with the least to lose are casual hobbyists like me.

We think that the writer has a valid point about insurance concerns driving the commercial drone sector. But his call for the nanny state to stop him from acting foolishly is, well, silly. First, his argument assumes that the woman on the bicycle would have been left without a remedy had his careless behavior injured her or her child.

His in-laws probably have homeowner’s insurance. It might have adversely impacted his marriage, but any decent plaintiffs’ lawyer would have immediately looked into that.

And perhaps the writer has his own umbrella policy. He doesn’t say.

But the larger question is, at what point do we call on government to restrict the liberty of others, simply because some people act irresponsibly? We license car drivers, and have laws against reckless driving and drunk driving, but that doesn’t stop cars from being deadly missiles (much more dangerous than a 5-pound drone) in the hands of the wrong person. Requiring licenses for model aircraft operators seems grossly disproportionate to the risk involved.

If we are to remain a free and open society, on some level we have to still count on the virtue of individual citizens as the primary keeper of civic peace. Translated into contemporary parlance:

Don’t be an idiot, and we’ll all get along, just fine.

Music Video Would Be Illegal to Film in the U.S.!

Pop band OK Go is well-known for its innovative music videos, in which the band captures amazing in-camera effects and choreography in a single take. Their most recent video, for the song “I Won’t Let You Down,” is no exception. This Busby Berkeley-style production will put a smile on your face:

How did they manage to do that? Well, part of the answer is that they used a drone.

It is perhaps worth noting that the video was filmed in Japan, where they seem to have taken a more pragmatic approach to drone photography than our own government. If this had been filmed in the U.S., OK GO might have been hit with a hefty fine, and we might have been deprived of the pleasure of watching this delightful production.

Michigan Company Developing Crash-Avoidance System

One of the primary goals of the FAA’s roadmap for drone integration is to develop an effective crash-avoidance system, or systems. A company in Michigan, called SkySpecs, has been working on a solution:

For the past five years, the SkySpecs team has been working on an object detection and avoidance system for aerial drones that could help even amateur pilots prevent dangerous collisions. Last week the company was accepted into the startup incubator R/GA Accelerator to help it get its first product, Guardian Crash Avoidance, to market.

The SkySpecs team, whose members met at the University of Michigan in 2009, started out by building their own drones for the International Aerial Robotics Competition. “I thought I wanted to do manned aircraft,” says Ellis, who was an aerospace engineering major. “But it was a time that drones were becoming popular and it seemed like a good opportunity.”

The problems of drone integration are going to be solved by entrepreneurs like these young people, not by bureaucrats.

We can all be superheroes now.

Today’s edition of The New Yorker has a long article on the current state and near-term future of drone technology. The author only alludes to the legal aspects of the technology, instead offering an overview of why drones can be both frightening and exhilarating. He likens the power of drones to making operators into superheroes.

The technology of unmanned flight has diversified so rapidly that there are now 1,500 different kinds of drones being manufactured, and they are participants in nearly every type of human endeavor, composing a whole flying-robot ecology so vast that to call every one by the same name can seem absurd. But drone, an impossible word, is also a perfect one. Each of these machines gives its human operator the same power: It allows us to project our intelligence into the air and to exert our influence over vast expanses of space….

Lost in the concern that the drone is an authoritarian instrument is the possibility that it might simultaneously be a democratizing tool, enlarging not just the capacities of the state but also the reach of the individual — the private drone operator, the boy in Cupertino — whose view is profoundly altered and whose abilities are enhanced. “The idea I’m trying to work out to simplify this whole thing — surveillance, drones, robots — has to do with superhero ethics,” says Patrick Lin, a technology ethicist at California Polytechnic State University. “It’s about what humans do when they have superpowers. What happens then?”

Read the whole thing.

Self-Defense Against Drones

The use of self-defense against drones has become a hot topic, especially in the wake of the case of a New Jersey man who shot down a drone that was flying over a neighbor’s property.

In a guest column at the ever-valuable Volokh Conspiracy blog, law professor A. Michael Froomkin and his research assistant, Zak Colangelo, present their thoughts on the law of self-defense against drones. They offer a great deal of food for thought, and we recommend reading the whole thing. But while we agree with some of their arguments, we have reservations about others.

Froomkin and Colangelo begin with a general observation about the applicability of self-help doctrines to robots:

[W]hen a person fears for her safety, property, or privacy, the same self-help doctrines that govern other issues should govern a person’s use of self-help against a robot, whether that robot is operating on land, air, or sea. That is, an individual threatened with harm should be able to employ countermeasures that are reasonable in proportion to the harm threatened. The rule shouldn’t be different just because a robot poses the threat. Thus, as a general matter — but subject to some pretty important exceptions — a person who reasonably fears harm from a robot has a right to act to prevent that harm, up to and even in some — but far from all — cases shooting it down.

It is important to note, they point out, that the law treats robots as property. Because the law places a greater value on human life than it does on property, “[a]cts of self-defense that would be unreasonable when threatened by a human will in many cases be reasonable — in an otherwise similar situation — in response to threats from a mere chattel.” However, “[t]he toughest question is the scope of permissible self-help when individuals fear for their privacy rather than for their safety or property….” (emphasis added).

Froomkin and Colangelo point out that, whereas a threat to one’s property is easier to quantify, a threat to one’s privacy is not:

A trespassing, spying drone can do a lot of damage, but privacy harms are hard to monetize, especially ex ante.* That means it is hard to weigh the potential damage against the harm that the self-helper risks doing to the offending chattel. Not only is privacy hard to value in general, but in this case the victim cannot know in advance how the operator of the drone intends to use the photos, hacked wifi, or whatever the drone may be collecting.

In light of this uncertainty piled on difficult valuation, we argue that the scope of permissible self-help in defending one’s privacy should be quite broad — else privacy will inevitably lack the protection it deserves. There is exigency in that resort to legally administered remedies would be impracticable — the drone will be long gone — and worse, the harm caused by a drone that escapes with intrusive recordings can be substantial and hard to remedy after the fact. Further, it is common for new technology to be seen — reasonably — as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great, or even extant, at least when the risks of collateral damage are small.

While recognizing that those who operate drones on the periphery of one’s property probably have some valid First Amendment claims, Froomkin and Colangelo “understand why people would be concerned to learn that drones might someday aim telephoto lenses into their bedrooms from the sky.”

Because an average person is likely to be unable to immediately assess a drone’s threat to his or her privacy, they argue that “[t]ort law is likely to be solicitous of the property-owner’s need to make quick decisions under uncertainty. That solicitude will not, however, extend to actions that presented a reasonable risk of danger to third parties, such as shooting into the air in populated areas….” (emphasis added).

We whole-heartedly agree with the latter point regarding actions that present a danger to others, but think that their underlying assumptions are flawed. First, keep in mind that, as we have noted here, it is highly improbable that a civilian drone operating in Class G airspace is going to have any meaningful “spying” capabilities. It might be noisy and bothersome, but a drone is not very effective as a “peeping Tom” device (unless you’re sunbathing nude in your backyard or on your roof, in which case your claim to a reasonable expectation of privacy is probably dubious, at best).

Froomkin and Colangelo suggest that the uncertainty over drone capabilities could be resolved by, first, instituting a blanket ban on weaponized drones in the U.S. This is a reasonable suggestion in principle. But, then again, how have blanket bans on weapons worked out in the past? The track record is not so great.

In any event, a small drone is unlikely to be a useful weapons platform. As anyone who has fired a gun can attest, the recoil from discharging a firearm would be as likely to send a small drone tumbling out of the sky as to propel a round toward its intended target.

Next, they propose that

all mobile robots should be required to carry warning markings, lights, and the equivalent of a Vehicle Identification Number (VIN) that would be recorded in a state or national registry…. Although far from perfect, these notices would be calibrated not just to warn of the drone’s presence, but also to say something about its capabilities, such as whether it carries a camera, and whether it is capable of capturing sounds or wifi or other information.

They further suggest that “[s]etting up a licensing regime and national or state-based registries would help connect a malfeasant robot to its owner or user, but no single system is likely to work in all circumstances.”

Froomkin and Colangelo seem to concede that their regime might well be unworkable, due in part to the problem of “cheating.” But we think that their proposal is far too complex, and relies far too much on unworkable regulatory regimes that will create more problems than they are likely to solve.

We propose a much simpler approach. First, we are persuaded that civilian “microdrones” should be regulated as consumer products, like cell phones and lawn mowers. We discussed that in a post on August 30, 2014. Off-the-shelf, consumer product regulations would solve many of the problems mentioned by Froomkin and Colangelo. For example, microdrones could be required to have built-in limitations on range and height. Naturally, they could be prohibited from having any sort of weapons capability.

Regulating microdrones as consumer products would also dispense with the need for a licensing regime. Model aircraft have been operated for generations without the need for mandatory licensing regulations. Maintaining a reasonable line-of-sight range limit for consumer model aircraft would more than adequately address concerns about hazards to public safety.
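To make the consumer-product idea concrete: built-in range and height limitations amount to the flight controller refusing commands that would take the aircraft outside a fixed operating envelope. The following is only an illustrative sketch; the numeric limits and function names are hypothetical, chosen for demonstration rather than drawn from any actual standard or product.

```python
import math

# Hypothetical limits for illustration only -- real figures would be set
# by a consumer-product standard, not by this sketch.
MAX_RANGE_M = 500.0     # assumed line-of-sight horizontal range limit
MAX_ALTITUDE_M = 120.0  # assumed altitude ceiling (roughly 400 feet)

def within_limits(home_xy, drone_xy, altitude_m):
    """Return True if the drone is inside the built-in operating envelope."""
    dx = drone_xy[0] - home_xy[0]
    dy = drone_xy[1] - home_xy[1]
    horizontal_range = math.hypot(dx, dy)  # straight-line distance from home
    return horizontal_range <= MAX_RANGE_M and altitude_m <= MAX_ALTITUDE_M

# A flight controller could reject any commanded position outside the envelope:
print(within_limits((0.0, 0.0), (300.0, 200.0), 100.0))  # inside both limits
print(within_limits((0.0, 0.0), (600.0, 0.0), 100.0))    # beyond range limit
```

The point of baking the check into the product, rather than into a licensing regime, is that compliance no longer depends on the operator's judgment.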

This sort of approach would remove much of the ambiguity concerning the capabilities of civilian microdrones. To the extent that any ambiguities might still exist, we think that the rules for using self-help against perceived threats from drones should be made abundantly clear.

We first note that we are adamant supporters of the right of self-defense as a fundamental human right. But with that comes an abiding respect for firearms safety and for the property rights of others.  Absent the sort of threat that would give rise to a justifiable use of deadly force under applicable state law, we would never advocate discharging a firearm in a heavily populated area. Froomkin and Colangelo seem to agree.

State legislatures could, if they wished, enact laws clarifying the circumstances in which a person may presume a threat of death or serious bodily harm from a drone, much like some state laws allow one to presume such a threat when an intruder invades one’s home or automobile. But we do not think that extending such a presumption to model aircraft would be a good idea.

The entire discussion might also be academic.

The FAA claims the authority to regulate or even prohibit the use of any flying object operated outdoors, regardless of its altitude or distance from an airport. If the FAA’s sweeping claim of jurisdiction is correct, then questions of federal/state preemption will necessarily come into play.

Current federal law prohibits the shooting down of any aircraft. Many small drones are already being regulated as “aircraft” by the FAA. Just consider the recent spate of 333 exemptions to the FAA’s “ban” on commercial drones. If one were to shoot down a drone being operated under such an exemption, it would be hard to argue that one had not just committed a federal crime.

But more than that, the FAA claims that the definition of “aircraft” includes model aircraft. Thus, according to the FAA, shooting down a model aircraft should be a federal crime. Until the FAA provides clarity on that, any discussion of whether a drone can be shot down by a civilian, under any circumstances, is unlikely to be useful.

*Lawyer-ese for “before the event.”

Facebook rolls out its high-altitude drones project


Facebook is rolling out plans to deploy high-altitude drones that would allow off-grid connectivity for users of its network. In addition to challenges in design, materials and technology, these drones will take us into some uncharted legal territory:

In order to fly its drones for months or years at a time, as it would have to do in order to provide consistent connectivity, Maguire explained, Facebook’s drones will have to fly “above weather, above all airspace,” which is anywhere from 60,000 to 90,000 feet in the air. That puts these drones on tricky regulatory footing, since there are essentially no regulations on aircraft that fly above 60,000 feet in the air. “All the rules exist for satellites, and we’re invested in those. They play a very useful role, but we also have to help pave new ground,” Maguire said.

Facebook and its counterparts will also have to find a way around regulations dictating that there must be one human operator to every drone, which could drastically limit the potential of such an innovation to scale. For proof, Maguire pointed to a recent solar drone demonstration by a British company, which ended after two weeks to give the pilots a break. “It’s like playing a videogame for two weeks straight with no rest,” he said. “We need a regulatory environment that will be open to one pilot perhaps managing 10 or 100 drones. We have to figure these things out.”

Other than the occasional spy plane or research balloon, what other traffic is at that kind of altitude?

More drone law-blogging at the WaPo

Michael Berry and Nabiha Syed have a new guest post on developments in drone law at The Volokh Conspiracy blog. Today, they provide a pretty solid overview of the FAA’s slow start at promulgating UAS regulations. Recommended reading for anyone who might be new to the topic.

Tomorrow, they promise to review the various lawsuits that have been filed.

A week of drone law blogging at the Washington Post!

Michael Berry and Nabiha Syed are guest-blogging at The Volokh Conspiracy this week in a series dedicated to the regulation of private drone use. Their first post, on “journo-drones,” is here. Today, they write about philosophical approaches to drone regulation:

As policymakers consider drone regulation – particularly with respect to privacy and safety – the possible fields of regulation fall into five principal realms: operators, flight, purpose, property and surreptitious use. Some of these categories face practical difficulties, while others present constitutional issues. Nevertheless, these five fields offer a framework to help make sense of the legislation and regulation emerging around the use of drones.

The authors plan to take up the history of the FAA’s piecemeal approach tomorrow.

In the meantime, we would suggest that they consider the consumer product approach to sUAS regulations for one of their posts.