On Tuesday, 5 November 2019, the International Business Machines Corporation (IBM) argued against an outright ban on facial recognition technology. Instead, the US computing giant called for “precision regulation” to protect privacy and civil liberties.
In a white paper posted on its website, IBM said policymakers should understand that “not all technology lumped under the umbrella of ‘facial recognition’ is the same.”
IBM said uneasiness about artificial intelligence technology capable of identifying people from face scans was reasonable.
The paper, by IBM chief privacy officer Christina Montgomery and Ryan Hagemann, co-director of the IBM Policy Lab, said,
“However, blanket bans on technology are not the answer to concerns around specific use cases. Casting such a wide regulatory net runs the very real risk of cutting us off from the many — and potentially life-saving — benefits these technologies offer.”
The comments come amid intense debate over deployment of facial recognition for applications in security and law enforcement, among others.
San Francisco and other cities have moved to ban facial recognition by government entities, and privacy activists have called for better guarantees against errors and bias.
Amazon recently said it supported regulation of facial recognition. Last year, Microsoft announced it was adopting a set of principles for the technology and called for new laws to avoid a “dystopian” future.
IBM said that policymakers should not ban all facial recognition technology. Instead, they should employ “precision regulation” in cases where there is “greater risk of societal harm.”
The company said a full ban might deny consumers the convenience of less frustrating air travel or prevent first responders from rapidly identifying natural disaster victims. But IBM also said some uses should remain off-limits, such as mass surveillance or racial profiling.
Montgomery and Hagemann said,
“Providers of facial recognition technology must be accountable for ensuring they don’t facilitate human rights abuses by deploying technologies such as facial matching in regimes known for human rights violations.”
IBM suggested basing the rules on “notice and consent” whenever facial recognition is used to verify someone’s identity. That would require stores seeking to “customise” a customer’s experience to provide clear notification that they are using face analytics.