Jay Leno – himself a consummate car guy – once quipped: "A new study published by The British Medical Journal found that inactivity can kill you. I mean, these are the kinds of findings that just scare the hell out of Congress."

Self-driving cars must scare the hell out of Congress too. This past June 14 – long after states began their forays into regulating self-driving cars – the U.S. Senate Commerce, Science, and Transportation Committee activated itself for a hearing – cheekily titled "Paving the Way for Self-Driving Cars" – aimed at advancing bipartisan federal legislation that would ease legal barriers facing automakers and technology companies as they develop autonomous vehicles. Prior to the hearing, Senators Gary Peters (D-Mich.), John Thune (R-S.D.), and Bill Nelson (D-Fla.) released a set of principles intended to govern future autonomous vehicle legislation. None of them is particularly controversial – and at bottom they are certainly worthy polestars for future legislation – but they raise a host of thorny legal and policy questions, which are outlined below.

In particular, in crafting a set of federal regulations governing self-driving cars, the Senators focused on:

Prioritizing safety and educating the public.

This sounds self-explanatory, but testimony at the hearing revealed a rather startling fact: Even in the age of airbags, safety cages, and myriad vehicle assist systems designed to help a driver avoid an accident, U.S. traffic deaths in 2015 increased 7.2% over the prior year – the largest increase in the past 50 years. Preliminary numbers for 2016 show another substantial increase. No doubt rapidly advancing technology has been a double-edged sword: knee and curtain airbags and blind-spot detection systems won't always save a driver whose eyes are glued to a text message, or who doesn't understand the limits of autonomous vehicle technology.

This is why the Senate principles quite prudently recommend that future legislation "address how companies can inform the public on what vehicles can and cannot do based on their level of automation and their individual capacities." Yes, safety warnings are generally good. But there are downstream implications: Not only are mandated government warnings fodder for future failure-to-warn/inadequate warning lawsuits, but they also risk diluting the appeal of the product to consumers. Tesla Motors has said that the Autopilot function in its cars "requires full driver engagement at all times." If that's the substance of a future warning slapped prominently on a steering wheel – and how could it not be, given the abundance of caution typically counseled by us lawyers – then many consumers may ask: "Then what's the point?" And that attitude, in turn, may reduce the incentive to further develop truly autonomous technology that no doubt has the potential to prevent many, many avoidable accidents and traffic deaths. Congress will need to walk this high wire carefully as it contemplates future legislation.

Promoting continued innovation, reducing existing legal roadblocks, and reinforcing the separate roles of state and federal governments – all while remaining "tech neutral."

The primary "roadblock" in regulating self-driving cars – as Senator Thune noted in his hearing Majority Statement – is that current transportation laws at all levels of government simply did not contemplate the rise of autonomous vehicle technology. Certainly, the Department of Transportation (DOT) regulates how cars are built, but not how they're driven – that job is primarily left to the states, who quite naturally have crafted their laws around what drivers must and mustn't do.

That regime goes out the window when a driver becomes irrelevant. Flipping the presumption of liability from man to machine in each and every rule of the road is no small task. Consider the modern-day equivalent of the trolley problem: suppose an imminent collision between a self-driving car with one passenger and a hapless pedestrian. The self-driving car senses that it can avoid hitting the pedestrian if it initiates an emergency avoidance maneuver, but that maneuver will in all likelihood kill the passenger. Who lives? And does the equation change if the self-driving car knows (by virtue of sensors under the seats) that it has four passengers to the one pedestrian? Our current system of laws dodges the question through the concept of intent – if a driver did all she could to avoid a collision with a pedestrian, but to no avail, we call it a tragic accident. If she intended to hit the pedestrian, or wouldn't have hit the pedestrian if she hadn't been drunk or texting, then we call it a crime. Even if we assume that it would be morally permissible (and therefore legal) for a self-driving car to intentionally kill one person to save the lives of four, try explaining that to the family of the dead pedestrian, who can't send the owner of the self-driving car to prison (she didn't do anything wrong), and who can't sue the car manufacturer for negligence (the car did what the law said it was supposed to do – intentionally hit a person). Do we simply accept this altered legal regime if the net result of driverless cars is to decrease the overall number of traffic deaths, as experts predict?

Equally vexing is the notion that the federal government has traditionally trodden relatively lightly on automobile safety issues, mandating watershed safety features (for example, seat belts, airbags, and backup cameras) while allowing state and local governments to make safety calls at the margins (for example, a stop sign versus a traffic light, or a 75-mph speed limit versus a "reasonable and prudent" one). But a central component of the autonomous vehicle industry is standardization in the name of safety and efficiency. And how do you standardize when state borders get in the way? If New York requires that all self-driving cars have a driver, but New Jersey doesn't, then how can a future truck fleet operation – one that's banking on a driverless freeway from coast to coast – operate? And how can manufacturers supply trucks to this future company in a cost-effective manner if technological requirements differ from state to state? Indeed, this year alone, there have been 70 different pieces of autonomous-vehicle legislation in 30 different states.

These hard questions are certainly why automotive manufacturers have been calling on Congress to preempt state and local regulations governing self-driving cars. At the same time, automakers have asked for flexibility in existing federal rules: In a written statement provided to the Senate Committee at the June 14 hearing, the Alliance of Automobile Manufacturers (AAM) asked Congress to pass legislation significantly expanding the number and duration of certain exemptions that NHTSA may grant to the Federal Motor Vehicle Safety Standards codified at 49 CFR Part 571 et seq. AAM's stated concern is that without these exemptions, "developers will not be able to deploy the technology at a scale necessary to collect more robust real-world data to inform future regulatory action." Example: what's the point of requiring an inside rearview mirror to "provide a field of view with an included horizontal angle measured from the protected eye point of at least 20 degrees, and a sufficient vertical angle to provide a view of a level road surface extending to the horizon beginning at a point not greater than 61 m to the rear of the vehicle when the vehicle is occupied by the driver and four passengers or the designated occupant capacity, if less, based on an average occupant weight of 68 kg[,]" 49 CFR § 571.111 (S5.1.1), if there's no driver to look in said mirror?

The challenges outlined above are no doubt why autonomous vehicle manufacturers (as AAM did in its hearing statement) are calling for guidance and voluntary standards – like the Federal Automated Vehicles Policy released by DOT in 2016 – as they continue to tinker with and perfect automated vehicle technologies. Which implicates another key principle articulated by the Senate Committee – that legislation mustn't favor certain autonomous-vehicle business models over others. Rather, "[s]elf-driving vehicles are likely to take different forms, use diverse technologies, serve consumers with varying capability levels, and follow multiple business models." That's undoubtedly true, but it's much easier said than done – especially at the federal level. If the aforementioned trucking company wants a standardized system of driverless trucks that runs on a coast-to-coast highway alongside driverless cars (because that's the cheapest and safest way to move freight), but Ferrari doesn't want to build a fully autonomous car (because that would materially dilute its brand), who wins?

Congress will likely continue to face these and other tough questions as it rouses itself into action on autonomous vehicles. So as they say – watch this space. And for the record: Jay Leno probably wouldn't buy an autonomous Ferrari.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.