Private ‘Rights Enforcement Agencies’ allow a granular approach, better suited to finding the proper balance between the harms and benefits of citizen gun ownership

Rights Enforcement Agencies

The economist and legal scholar David Friedman has imagined a future where, instead of having police and courts provided by the state, people access the legal system through private organisations he calls Rights Enforcement Agencies. These REAs settle disputes between their clients by bringing them before a private judge.

For a present-day precursor of the eventual REAs, think less of the infamous Blackwater mercenaries (nominally a private firm, but primarily a government contractor) and more of Dale Brown’s Threat Management, an organisation selling its protection and prevention services directly to private residents of Detroit.

The incentives that move REAs are different from those that a traditional police force faces. An REA depends on pleasing its subscribers directly, while a police force is beholden to political masters. This means we can expect an REA to conduct itself quite differently to a police force. Brown gives one example:

Police departments are tasked with arresting people after they’ve broken the law. So if you think about it, law enforcement’s way of thinking is based upon prosecution and ours is based on prevention.

Balanced incentives

Rights enforcement agencies try to attract and to keep subscribers to their services. Some of those potential customers will want access to firearms, and REAs have several related reasons to take a permissive stance towards gun ownership among their clients.

  1. REAs know that they can’t always be there in time to protect their clients if they’re attacked. If a client is killed by an attacker, the REA has one less paying customer.
  2. Unlike governmental police forces, who have only very weak incentives to look out for the interests of the citizens they are imagined to protect, REAs who do a bad job of satisfying the preferences of customers (including a preference to own a gun) are more likely to lose out to competitors who do better.
  3. In cases of violence against customers, the REA would have to use scarce resources to apprehend and prosecute offenders, and if it fails to apprehend an offender, its customers (and prospective future customers) are more likely to choose an alternative REA to represent them in the future. REAs therefore have a reason to look favorably on anything that can act as an effective deterrent against violence, including firearm ownership.

So REAs have an incentive to help customers who want to have the means to defend themselves, or ward off would-be attackers, and owning a gun is an effective way to do that.

On the other hand, REAs also have an incentive to try to minimize the likelihood that their own clients harm anyone who is not threatening to violate their rights, whether deliberately or accidentally. Think of fatal crimes of passion and unintentional discharges. Access to a gun makes these unwanted outcomes more likely. Why would an REA want to avoid these things happening?

  1. In a case where a customer kills a customer of another REA, the REA needs to expend resources on participating in the legal process initiated by the rival REA.
  2. In a case where a customer kills another customer of the same REA, the procedural costs might be lower since an external judge may not be required. But the REA is disadvantaged anyway because it has lost a paying customer.

So REAs also have reasons to restrict gun ownership among their clients.

Neither a totally permissive policy nor a prohibitionist policy is likely to be optimal with respect to revenue; each means offering a service that fits poorly with the preferences of some sub-population of potential customers. Instead I believe that REAs are likely to seek a middle way.

Contracts

Because the REA’s relationship with a client is subject to an explicit contract, the REA can use that agreement to secure prospective clients’ consent to various measures that would allow gun ownership while minimising the likelihood of the client harming an innocent.

An REA contract could specify that clients agree not to handle or own any firearm except under the conditions listed. A client would be incentivized to adhere to this stipulation, not only because they want to remain a client of the REA they have chosen, but also because adherence would increase the likelihood of them being exonerated in any case of legitimately defensive firearm use.

Smart Guns

One condition may be that clients are permitted only to use a firearm issued by the REA itself. These would be ‘smart guns’ designed to serve the REA’s intermediate goal of minimizing costly rights violations. The details of these weapons aren’t crucial to the broader picture here, but I’m imagining features like fingerprint activation, on-board video camera(s), microphones, a torch, and motion sensors. The sensors begin recording data to circular buffers as soon as the weapon is picked up, and the recordings can be used by the REA to verify that any given discharge of the weapon was contractually permitted.
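
To make the circular-buffer idea concrete, here is a minimal sketch in Python. Everything in it (the class and method names, the buffer window, the fingerprint check) is my own illustrative assumption about how such a weapon might be built, not a specification:

```python
from collections import deque

BUFFER_SECONDS = 30          # assumed length of the retained sensor window
SAMPLES_PER_SECOND = 24      # assumed sensor sampling rate

class SmartGunRecorder:
    """Illustrative recording logic for a hypothetical REA-issued smart gun."""

    def __init__(self):
        # A deque with maxlen behaves as a circular buffer: once the
        # window is full, the oldest frames are silently discarded.
        self.buffer = deque(maxlen=BUFFER_SECONDS * SAMPLES_PER_SECOND)
        self.armed = False

    def on_pickup(self, fingerprint_hash: str) -> bool:
        # Fingerprint activation: the weapon arms only for an enrolled client.
        self.armed = self.is_enrolled(fingerprint_hash)
        return self.armed

    def on_sensor_frame(self, frame: bytes) -> None:
        # Cameras, microphones, and motion sensors push frames here from
        # the moment the weapon is picked up; the buffer always holds
        # only the most recent window.
        self.buffer.append(frame)

    def on_discharge(self) -> list:
        # Freeze and hand over the buffered window so the REA can later
        # verify that this discharge was contractually permitted.
        return list(self.buffer)

    def is_enrolled(self, fingerprint_hash: str) -> bool:
        # Placeholder lookup against the REA's enrolled clients.
        return fingerprint_hash in {"example-client-print-hash"}
```

The deque-with-maxlen pattern is the whole trick: the weapon never retains more than the last half-minute or so of sensor data unless a discharge freezes the window for review.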

Perhaps the gun is recalled and replaced with a new one on a repeating schedule, so that the REA can inspect it for signs of tampering (contractually forbidden, of course) or for unaccounted-for discharges, and analyse any recordings to help streamline its operations.

On the other hand, such a clause may be a deal-breaker for some; a gun that can be disabled remotely is a serious security concern.

Profiling

It costs an REA money if a client gets into legal trouble; the legal process puts a demand on the firm’s resources. Suppose I’m the CEO of an REA. Especially if my client has a gun, I’d like them to be in a mental and physiological state that minimizes the likelihood of their doing any harm to others. There are some constraints on how I can solve that problem. For instance, it wouldn’t do to drug my gun-owning clients into a permanent sleep-like state; I want to attract and retain customers. The solution I settle on needs to be one that wouldn’t horrify people who are shopping around for an REA.

But if I aim to promote a mind-body state in my subscribers that fits adjectives like calm, rested, content without sacrificing alertness — a state that many people would like to be in anyway — that’s a proposition that gun owners might not find abhorrent.

  • Concretely, perhaps gun-owning clients must consent to wearing a biofeedback-gathering device, or submit to regular lab tests. The REA might use a neuroprediction approach in trying to forecast the client’s future behavior.
  • Perhaps the client’s subscription fee would vary based on the outcomes of these tests. If the statistical analysis places the client in a group more likely to commit acts of aggression, they are notified. In a smart-guns scenario their access to any REA-issued firearm is locked. Perhaps if a second test in a month’s time doesn’t show improvement, their subscription fee is increased according to a pre-established schedule (a toy sketch of such a schedule follows this list).
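
As a toy illustration of what such a pre-established schedule might look like, here is a sketch in Python. The base fee, the risk threshold, the surcharge step, and the idea of counting failed retests are all invented for the example:

```python
from dataclasses import dataclass

BASE_FEE = 100.0        # assumed monthly subscription fee
RISK_THRESHOLD = 0.7    # assumed score above which a client is high-risk
SURCHARGE_STEP = 0.25   # assumed 25% fee increase per failed retest

@dataclass
class ClientStatus:
    risk_score: float        # output of the REA's risk model, in [0, 1]
    failed_retests: int = 0  # consecutive tests showing no improvement

def monthly_fee(status: ClientStatus) -> float:
    """Fee under the pre-established schedule: the base fee plus a
    surcharge that grows with each retest that shows no improvement."""
    return BASE_FEE * (1 + SURCHARGE_STEP * status.failed_retests)

def firearm_unlocked(status: ClientStatus) -> bool:
    """In the smart-gun scenario, access to the REA-issued firearm is
    locked while the client remains in the high-risk bracket."""
    return status.risk_score < RISK_THRESHOLD

# Example: a client flagged as high-risk who failed one follow-up test.
client = ClientStatus(risk_score=0.82, failed_retests=1)
print(firearm_unlocked(client))  # False: the firearm stays locked
print(monthly_fee(client))       # 125.0: base fee plus one surcharge step
```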

Especially once their datasets reach a certain size, it seems likely that REAs would use machine-learning models to predict instances of violence. It’s important to notice that the REA would have a financial incentive to make sure its risk models and profiles track reality closely at all times. If the REA makes it difficult or expensive for peaceful, low-risk people to keep a firearm, those people are more likely to go to a competitor with better calibration.
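
To make ‘better calibration’ concrete: a risk model is well calibrated if, among the clients it assigns (say) a 10% risk, roughly 10% actually go on to offend. Here is a hypothetical check in Python, assuming the REA keeps historical predictions alongside eventual outcomes; the function and the toy data are invented for illustration:

```python
import numpy as np

def calibration_table(predicted: np.ndarray, actual: np.ndarray, bins: int = 10):
    """Compare predicted risk against observed frequency per risk bucket.

    predicted: model risk scores in [0, 1] for past clients
    actual:    1.0 if the client later committed an act of aggression, else 0.0
    """
    edges = np.linspace(0.0, 1.0, bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (predicted >= lo) & (predicted < hi)
        if mask.any():
            rows.append((lo, hi, predicted[mask].mean(), actual[mask].mean()))
    return rows  # (bucket low, bucket high, mean predicted, observed rate)

# Toy data that is perfectly calibrated by construction.
rng = np.random.default_rng(0)
scores = rng.uniform(size=10_000)
outcomes = (rng.uniform(size=10_000) < scores).astype(float)
for lo, hi, pred, obs in calibration_table(scores, outcomes):
    print(f"[{lo:.1f}, {hi:.1f}): predicted {pred:.2f}, observed {obs:.2f}")
```

A well-calibrated REA would see the mean predicted risk match the observed rate in every bucket. A model that systematically over-predicts risk locks out peaceful clients and pushes them towards a better-calibrated competitor, which is exactly the market pressure described above.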

On the other hand, if the REA's firearm-owning customers harm innocents, the REA also pays a financial cost: the costs of the legal process, a possible requirement to compensate the victims or their heirs, and the possible loss of a paying customer (who isn’t in a position to continue being a client after the legal process is complete).

Collaborating for wellness

The client prefers to have access to a firearm and to pay less, and the REA wants to keep them as a client. So the REA has protocols in place that help the client return to the low-risk bracket. Call it risk mitigation therapy. This therapy could take any approach shown to have an effect on a person’s proclivity to violence. It might incorporate interventions related to nutrition, sleep, mental exercise, counselling, or a drug regime.

Financial incentives on both sides are aligned so that the REA and client effectively collaborate to minimize the risk the client poses to the rest of society.

Objection: Futurecrime

In Philip K. Dick’s short story “The Minority Report”, a division of ‘Precogs’ aids the police by accurately foreseeing crimes. Would-be criminals are apprehended before they have harmed anyone. In the situation I’ve described above, an REA’s AI resembles the Precog division to a limited degree. A concern might be that the world of “The Minority Report” is an unjust dystopia, and we don’t want a situation that resembles it.

There are important differences between the Minority Report world and any situation that could obtain in real life under a condition of competing REAs. The Precogs, and the police they collaborate with, are agents of the state: an ultimate decision-making authority with a monopoly on the right to initiate violence. The state has no local competitors, so ‘exiting’ the relationship with the local state is usually extremely costly for a citizen. Consequently the state, and by extension its agents, feels very little pressure to treat citizens well.

By contrast, an REA which has acquired the ability to accurately predict a client’s behavior operates within a network of sovereign institutions and actors. Unlike the state (nominally constrained by a constitution, but in practice free to disregard or ‘reinterpret’ it), it could not unilaterally decide to preemptively imprison people, even if it placed a very high likelihood on their committing crimes in the future; the rest of society would be unlikely to permit it without a fight. We might expect impediments to such belligerence from:

  • Pressure from competing REAs — eager to accept new paying customers.
  • Insurance firms — who underwrite REAs for legal fallout and provide them with ‘insurance-approved’ certification on the basis of which communities would accept or bar them.
  • Private judges — who have heavily invested in their reputation for being dependable issuers of verdicts that accord with our most deeply shared intuitions about justice.

Objection: Imbalance of power in favour of REAs

Here’s a concern that carries more weight for me: under this arrangement, the REA has access to lots of data on its customers. That data is used to predict the likelihood of violence, but it could be used for other purposes, and by other parties, too.

The data could be sold without the client’s consent, or analysed for some purpose the REA never communicated to the client. Perhaps, for instance, the REA routes fewer units to neighbourhoods with clients whose physiology makes them less likely to get angry and complain about slow response times.

One response could be that customers have to suck it up; the REA is able to charge them less for the service it provides because it can use this information about them to limit its uncertainty. Customers have to factor the likelihood of leaks and of stolen, sold, or abused data into that cost/benefit calculation.

But this doesn’t feel satisfactory to me, especially if we consider the possibility that people are subject to cognitive biases causing them to reliably undervalue their privacy. Others might not appreciate the long-term costs of giving away data that looks harmless in the present but might put them, or their friends and family, at a serious disadvantage when that data is aggregated, particularly when many others are similarly surrendering their data.

Perhaps this is a problem that’s only really resolved in the ultimate equilibrium of Brinnian Omniveillance (reasonable people might disagree with me that this route offers any equilibrium habitable by humans). In this situation, any information about any party is available to those who choose to look it up. So while an REA can access all information about its clients, the clients can do the same with respect to all agents of the REA. And because the act of ‘abusing’ information about a person is itself surveilled and indelibly recorded, a norm emerges where accessing ‘private’ information about others is something done only in very special circumstances.

Granular governance delivers a better fit to our values

I see the scheme above as part of the move towards a society that operates on incentives instead of directives, to use Justin Goro’s phrase.

The control of guns needn’t take the form of rules imposed unilaterally by a commanding power determining who may own a firearm. Instead we’re all better off if private companies and their clients end up cooperating — thanks to price signals and market competition — to arrive at a balance between the risk of harm to innocents on the one hand, and the capacity of innocents to protect themselves on the other.

Unlike the blunt instrument of state-imposed regulation, this approach is distributed, its costs are internalized, and it has the capacity to be highly granular. The result, I believe, would be a much better net value fit: many more people’s preferences satisfied to a far greater degree than under the status quo.

Thanks to Justin Goro and Max Borders for critical questions and editing.

