Social Physics Lecture on Legal Ethics and Machine Learning

November 23, 2016

On December 30th, 2016, Dazza Greenwood and invited guest lecturer Kathryn Hume will present "Legal Ethics and Machine Learning" to the Social Physics graduate seminar at the MIT Media Lab.


Background Materials:

Scenario: Automated Legal Advice Service

Assume a data-driven, algorithmic, speech- or text-based (i.e., "chatbot") service that provides professional advice to individuals (e.g., legal, financial, or medical advice).

  • Challenge Question 1: What methods and techniques of Social Physics (including training the model) would you use to ensure a legal service provided "client-centered" advice in a way that prioritized the interests and autonomy of the client above all else? (See the reading on "The Client-Centeredness and Client Autonomy Model")
  • Challenge Question 2: To evaluate how accurately and effectively the service applied these client-centered priorities, what would you measure and how would you test?
  • Challenge Question 3: How and to whom (or what) should rules of professional ethics and fiduciary duties be applied? What new or different rules of ethics or other safeguards are needed to address the problems of "bias" identified in the reading "ABA Predictive Coding"?



For the sake of keeping a record and perhaps to pull out some of the good bits, an initial draft of the discussion questions is included below:

* Scenario 1: ML/AI Legal Service Provider Replacement for Lawyer

 - Can ML and AI systems provide sound legal advice and conduct meetings with clients in accord with the approach described in the readings on Client-Centered Legal Consultation?

 - If an individual client receives legal services directly from an ML-enabled expert system, how and to whom (or what) should standards of professional conduct and fiduciary duties apply?


* Scenario 2: Desktop/Laptop/Smartphone Based Expert System/Decision Support/ML/AI Legal Service Provider Augmenting the Client During Consultation With Attorney 

 - When a client can accurately identify and assess the probabilities of relevant risks with the aid of ML/AI-enabled personal computers, how are attorney ethical duties impacted?


* Scenario 3: Wearable/On-Body/Subdermal/Embedded ML/AI Enabled Service Extends the Cognition of Client During Consultation With Lawyer (and "in the world") 

  - If ML/AI "extended cognition" services can continually enhance the client understanding and decision-making capacity needed to navigate legal issues, should an attorney provide access to such services when available?

  - If available "extended cognition" services enhance understanding of some issues but not others, how can a lawyer know if the client is in a better or worse position to make decisions? 


* Scenario 4: Ethical and Legal Issues with Bias Related to "Extended Cognition" Services

  - What, if any, level or type of actual bias impacting client understanding and decision making should be prohibited during lawyer-client consultation?

  - What criteria and testing methods would be appropriate to measure the level of bias induced by or resulting from such services?

  - What types of decisions should highly biased cognitive enhancements be prohibited from impacting? 

  - What level or type of bias should violate consumer protection or commercial law?

  - Does it matter if bias of cognitive extender services is deliberately designed to favor the commercial provider of the extended cognition services?  

  - What if the provider set a bias that increased consumer acceptance of price increases and reductions in the scope of warranty for its own service?


* Scenario 5: Bias Introduced at the Behest or on Behalf of Third Parties that Impacts the Understanding and Decisions of Legal Clients, Consumers, and Citizens in General

  - Is it ethical for a lawyer to provide extended cognition services to a client if the lawyer knows deliberate bias exists that is likely to impact the inferences and associations the client will make?

  - Should commercial providers of such cognitive extenders be prohibited from allowing deliberate bias in return for payment by companies to influence consumers to favor their products? 

  - Would it matter if the bias paid for by third parties impacts understanding, inferences, associations, and decision making in a way that changes national election results?

  - Should a consumer be made aware they are being exposed to the above types of deliberate bias (e.g., via real-time alerts, later receipt of logs, or general advance warnings)?

  - If consumers should be made aware of the above, how and when should such notices be delivered, and what types of notices are appropriate? How much notice would be too much?

  - Can any legal signature or authorization be validly provided by a person deciding under the influence of perceptual, logical, or emotional bias intentionally induced by cognitive extenders?

  - Can informed consent to participate in human subjects research or the legitimate consent to be governed ever be valid when provided under the influence of the above types of bias?  


* Final Reflections on Extrapolating the Role of Fiduciary Professionals into the Age of AI: 

  - Can Expert Fiduciaries Sworn to Put Individual Clients' Interests First Ensure Fair Value Exchanges and Provide Adequate Safeguards for the Use of Extended Cognition Services?

  - Yes. The role of such professionals is to protect and serve in this way.  

  - Attorneys and certain other fiduciaries, like non-profit, collectively owned federal credit unions, must gain expertise in data science to correctly perform their function for individuals and society.