
Q and A with Maura Grossman: The ethics of artificial intelligence
Can we reach consensus on how AI will be used, regulated and interwoven into society?
By Joe Petrik, Cheriton School of Computer Science

Maura R. Grossman, JD, PhD, is a Research Professor in the Cheriton School of Computer Science, an Adjunct Professor at Osgoode Hall Law School, and an affiliate faculty member of the Vector Institute for Artificial Intelligence. She is also Principal at Maura Grossman Law, an eDiscovery law and consulting firm in Buffalo, New York.
Maura is best known for her work on technology-assisted review, a supervised machine learning approach that she and her colleague, Computer Science Professor Gordon V. Cormack, developed to expedite the review of documents in electronic discovery. She teaches Artificial Intelligence: Law, Ethics, and Policy, a course for graduate computer science students at Waterloo and upper-year law students at Osgoode, as well as the ethics workshop required of all students in the master's programs in artificial intelligence and data science at Waterloo.
Artificial intelligence is an umbrella term first used at a Dartmouth College workshop in 1956. AI means computers doing intelligent things: performing cognitive tasks such as thinking, reasoning, and predicting that were once thought to be the sole province of humans. It's not a single technology or function.
Generally, AI involves algorithms, machine learning, and natural language processing. By algorithms we simply mean a set of rules to solve a problem or perform a task.
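To make that definition concrete, here is a small, hypothetical Python sketch (the invoice scenario, the 30-day threshold, and the toy data are all invented for illustration): the first function is an algorithm in the plain sense, a rule someone wrote down, while the model below it learns a comparable rule from labelled examples.

```python
# Hypothetical example: an explicit, hand-written rule versus a rule
# learned from data. The scenario and numbers are invented.
from sklearn.linear_model import LogisticRegression

# An algorithm in the plain sense: a fixed set of rules for a task.
def is_overdue(days_since_invoice: int) -> bool:
    return days_since_invoice > 30

# A machine-learning model infers a similar rule from labelled examples.
days_outstanding = [[5], [12], [28], [31], [45], [60]]
went_unpaid = [0, 0, 0, 1, 1, 1]
model = LogisticRegression().fit(days_outstanding, went_unpaid)

print(is_overdue(40))             # rule written by a person
print(model.predict([[40]])[0])   # rule inferred from the examples
```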
There are basically two types of AI, though some people believe there are three. The first is narrow AI, sometimes called weak AI. This kind of AI does some task at least as well as, if not better than, a human. We have AI technology today that can read an MRI more accurately than a radiologist can. In my field of law, we have technology-assisted review, AI that can find legal evidence more quickly and accurately than a lawyer can. Other examples are programs, such as AlphaGo, that play chess or Go better than top players.
The second type is artificial general intelligence, also called strong AI; this kind of AI would do most if not all things better than a human could. This kind of AI doesn't yet exist, and there's debate about whether we'll ever have strong AI. The third type is artificial superintelligence, and that's really more in the realm of science fiction. This type of AI would far outperform anything humans could do across many areas. It's obviously controversial, though some see it as an upcoming existential threat.
AI is used in countless areas.
In healthcare, AI is used to detect tumours in MRI scans, to diagnose illness, and to prescribe treatment. In education, AI can evaluate teacher performance. In transportation, it's used in autonomous vehicles, drones, and logistics. In banking, it's determining who gets a mortgage. In finance, it's used to detect fraud. Law enforcement uses AI for facial recognition. Governments use AI for benefits determination. In law, AI can be used to examine briefs parties have written and look for missing case citations.
AI has become interwoven into the fabric of society and its uses are almost endless.
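Many of these applications, including the technology-assisted review mentioned earlier, come down to supervised text classification: a person labels a small set of examples, a model is trained on them, and the model then scores the remaining items. The sketch below is a generic, hypothetical illustration of that idea using scikit-learn and invented toy documents; it is not the actual TAR system Grossman and Cormack developed.

```python
# Hypothetical sketch of supervised document review: train a classifier on a
# few human-labelled documents, then rank unreviewed documents by predicted
# relevance. The documents and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled_docs = [
    "email discussing the merger agreement and due diligence",
    "memo on indemnification clauses in the draft contract",
    "lunch menu for the office holiday party",
    "newsletter about the company softball league",
]
labels = [1, 1, 0, 0]  # 1 = relevant to the legal matter, 0 = not relevant

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(labelled_docs), labels)

unreviewed = ["draft of the merger term sheet", "parking garage closure notice"]
scores = classifier.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")  # review the highest-scoring documents first
```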
AI isn't ethical, just as a screwdriver or a hammer isn't ethical. AI may be used in ethical or unethical ways. What AI does, however, is raise several ethical issues.
AI systems learn from past data and apply what they have learned to new data. Bias can creep in if the old data that's used to train the algorithm is not representative or has systemic bias. If you're creating a skin cancer detection algorithm and most of the training data was collected from White males, it's not going to be a good predictor of skin cancer in Black females. Biased data leads to biased predictions.
How features get weighted in algorithms can also create bias. And how the developer who creates the algorithm sees the world and what that person thinks is important (what features to include, what features to exclude) can bring in bias. How the output of an algorithm is interpreted can also be biased.
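A small synthetic experiment can illustrate the training-data problem described above: a model fit mostly on examples from one group tends to be much less accurate on a group that was barely represented. Everything in this sketch (the data, the groups, the model choice) is invented for illustration.

```python
# Hypothetical sketch: a model trained mostly on one group of people is much
# less accurate on a group that was barely represented in the training data.
# All data here is synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's cases sit in a different region of feature space,
    # so a rule fit to one group transfers poorly to the other.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X_a, y_a = make_group(950, shift=0.0)   # heavily represented group
X_b, y_b = make_group(50, shift=3.0)    # under-represented group

model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

X_a_test, y_a_test = make_group(1000, shift=0.0)
X_b_test, y_b_test = make_group(1000, shift=3.0)
print("accuracy on well-represented group:", model.score(X_a_test, y_a_test))
print("accuracy on under-represented group:", model.score(X_b_test, y_b_test))
```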
Most regulation so far has been through "soft law": ethical guidelines, principles, and voluntary standards. There are thousands of soft laws, and some have been drafted by corporations, industry groups, and professional associations. Generally, there's a fair degree of consensus as to what would be considered proper or acceptable use of AI; for example, AI shouldn't be used in harmful ways to perpetuate bias, AI should have some degree of transparency and explainability, and it should be valid and reliable for its intended purpose.
The most comprehensive effort to date to legislate AI is the regulation the European Union proposed in April 2021, the first comprehensive AI law. It classifies AI uses into risk categories. Some uses of AI are considered unacceptably high risk and are prohibited outright; they tend to be things like using AI to manipulate people psychologically. Another prohibited use is AI to determine social scores, where a person is monitored and gains points for doing something desirable and loses points for doing something undesirable. A third prohibited use is real-time biometric surveillance.
The next category is high-risk AI, tools like those used in medicine and in self-driving vehicles. A company must meet all sorts of requirements, such as conducting risk assessments and keeping records, before such AI can be used. Then there are low-risk uses, such as web chatbots that answer questions. Such AI requires transparency and disclosure, but not much else.
It's very difficult to train an algorithm to be fair if you and I cannot agree on a definition of fairness. You may think that fairness means the algorithm should treat everyone equally. I might believe that fairness means achieving equity or making up for past inequities.
Our human values, cultural backgrounds, and social expectations often differ, making it difficult to determine what an algorithm should optimize. We simply don't have consensus yet.
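The disagreement is easy to see once fairness is written down as a formula. In the hypothetical example below, one common definition (equal selection rates across groups, often called demographic parity) is satisfied, while another (equal true positive rates, often called equal opportunity) is violated by the very same decisions. The numbers are invented purely for illustration.

```python
# Hypothetical sketch: two formal definitions of "fairness" applied to the
# same decisions can disagree. Toy numbers only.
import numpy as np

# group membership, true outcome, and a model's decisions for 10 people
group    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
actual   = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 0])
decision = np.array([1, 1, 0, 0, 0, 1, 1, 0, 0, 0])

for g in ("A", "B"):
    mask = group == g
    selection_rate = decision[mask].mean()               # demographic parity compares this
    tpr = decision[mask & (actual == 1)].mean()          # equal opportunity compares this
    print(f"group {g}: selection rate {selection_rate:.2f}, true positive rate {tpr:.2f}")

# Here the selection rates match (0.40 vs 0.40), satisfying demographic parity,
# but the true positive rates differ (0.67 vs 1.00), violating equal opportunity.
```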
That's a difficult question to answer. There is definitely something to be said for transparency and explainability, but in many circumstances it may be good enough if the AI has been tested sufficiently to show that it works for its intended purpose. If a doctor prescribes a drug, the biochemical mechanism of action may be unknown, but if the medication has been proven in clinical trials to be safe and effective, that may be enough.
Another way to look at this is: if we choose to use less sophisticated AI that we can more easily explain, but it is not as accurate or reliable as a more opaque algorithm, would that be an acceptable tradeoff? How much accuracy are we willing to give up in order to have more transparency and explainability?
It may depend on what the algorithm is being used for. If it's being used to sentence people, perhaps explainable AI matters more. In other areas, perhaps accuracy is the more important criterion. It comes down to a value judgment.
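One way to make that tradeoff tangible is to compare an interpretable model with a more opaque one on the same task. The sketch below, using a standard scikit-learn dataset purely for illustration, trains a two-level decision tree whose rules can be printed and read, alongside a 300-tree random forest that typically scores a bit higher but cannot be summarized so simply.

```python
# Hypothetical sketch of the accuracy/explainability tradeoff: a small,
# readable decision tree versus a harder-to-explain ensemble.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
opaque = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("shallow tree accuracy:", simple.score(X_test, y_test))
print("random forest accuracy:", opaque.score(X_test, y_test))

# The shallow tree can be printed and read as plain rules...
print(export_text(simple, feature_names=list(load_breast_cancer().feature_names)))
# ...the forest of 300 trees cannot, even if it scores a little higher.
```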
Interested in learning more about ethical AI? Visit the Cheriton School of Computer Science website.