What Would Einstein Do?

Last Updated: February 7, 2024

The Ethical Conundrums Surrounding Artificial Intelligence
By Christopher Surdak, JD

This week I presented at the Sub-Four eDiscovery and Information Governance retreat for the legal industry, hosted at Pelican Hill Resort. While these topics may strike some as being about as exciting as watching paint dry, others of us find them interesting, relevant, and sometimes even critical to our careers and our lives. In the session I moderated, we discussed the implications of Artificial Intelligence for the legal profession, and whether we believed AI would have a meaningful impact on the practice of law. The discussion rapidly detoured into the ethical implications of AI in the law, and I wanted to revisit that discussion here – for posterity.

Einstein’s Biggest Blunder

To understand what is at stake with the use of AI, I look back to the life, times, and challenges taken up by none other than Albert Einstein. One of the problems Einstein struggled with during his lifetime was the age of the universe: how it began, and how it might end. Prior to 1931, Einstein believed (as did most of his contemporaries) that the universe was static, and had pretty much always been that way since its creation. But this belief was shattered by the work of Edwin Hubble, the astronomer who discovered that the universe was actually expanding, presumably from a singularity that we now know as the Big Bang.

In retrospect, that the universe is not static, or perfectly balanced, seems obvious. The First, Second, and Third Laws of Thermodynamics pretty much guarantee that, over long enough time scales, nothing ever remains the same. Entropy and enthalpy are in a constant tug of war with each other, and no system can remain static for long given this battle over the soul of the universe.

But not all dynamic systems are created equal. As we discussed in the session at the retreat, dynamic systems are either convergent or divergent in nature: over time, a dynamic system will either collapse to a single, stable state (convergent) or expand into a polarized, ever-widening state (divergent). Regarding the universe, astronomers are still arguing over whether it is heading for a convergence (the so-called Big Crunch), where it collapses back upon itself, or whether it is divergent and heading for a Big Whimper, where it keeps expanding until space is filled with essentially nothing. Cosmologists still puzzle over this question, but the notion of a static, unchanging universe is no longer a viable option.
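The distinction can be sketched with a toy example (a hypothetical illustration of my own, not something from the retreat): repeatedly apply an update rule to a starting state and watch whether the trajectory settles toward a fixed point or runs away without bound.

```python
# Toy illustration of convergent vs. divergent dynamic systems:
# iterate a simple update rule and observe where the state heads.

def iterate(update, x0, steps):
    """Apply `update` to x0 `steps` times and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(update(xs[-1]))
    return xs

# Halving each step: the state collapses toward a single stable value (0).
convergent = iterate(lambda x: 0.5 * x, 100.0, 20)

# Doubling each step: the state grows without bound.
divergent = iterate(lambda x: 2.0 * x, 1.0, 20)

print(convergent[-1])  # very close to 0
print(divergent[-1])   # enormous: 2**20
```

The rules here are deliberately trivial; real systems (cosmological or societal) are far messier, but the endpoint question is the same: does the system pull together or fly apart?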

Coming Together or Falling Apart?

What does this have to do with the adoption and impact of AI in the legal industry, or for that matter in society at large? Over time, the societal impact of any technology will also be either convergent or divergent. The automobile was divergent for society; it allowed people to move out of cities and live in the suburbs. Cloud computing is proving to be convergent, as organizations eliminate their dedicated data centers and move their computational loads to shared cloud resources. It often takes a long time to see which path a given technology will follow to its inevitable result, but this lifecycle grows shorter year by year as technological innovation continues to accelerate.

As discussed at the retreat, AI has already started to have an impact on our society, albeit a relatively small one thus far. It is also fair to say that the organizations driving the development and use of AI are those that command the largest pools of computational resources, the largest staffs of talented data scientists, and the largest piles of data with which to perform training and analysis. These organizations are the current batch of digital giants whose names are known to us all: Apple, Google, Amazon, Facebook, Twitter, and Microsoft. They have both the technical wherewithal to bring AI to fruition in our world and the financial incentives to do so.

This raises the question: will these organizations use AI to achieve convergent results or divergent results? Further, which of these is the more ethical path, and which the potentially more destructive one? To answer this, we can look at how these organizations have used other, similarly disruptive technologies, and make the logical leap that they will likely follow the same path with AI.

The Canary in the Coal Mine is “Tweeting”

Specifically, I’ll use social media and cloud computing as the exemplars, as much of the data and the analytics that come from exploiting social media and the cloud will feed the beast of AI. Arguably, Amazon and Microsoft have used these technologies to achieve convergence: driving for efficiency and better outcomes. Tesla, too, has demonstrated that it is using these technologies to advance AI toward a convergent, beneficial end.

But it can be argued that the other digital giants of Silicon Valley have used these technologies for divergent, and in some cases destructive, ends in order to amass money and power. As was recently revealed regarding Facebook, the social media giants very purposefully use their platforms in a divergent way, to drive a wedge between different groups of people. Their business model is all about exploiting the Attention Economy, where eyeball-minutes and likes are the currency of the realm, and the goal is to keep people staring into their smartphones as often as possible.

Unfortunately, many humans are attracted to, if not addicted to, drama. Mass Media has known this for centuries, having followed the mantra of “if it bleeds it leads” on the nightly news or in the daily newspaper. Controversy attracts attention, and with social media powered by analytics and industrial-scale psychometric profiling, it is quite easy to determine who is susceptible to falling for and contributing to controversy.

If it Bleeds, it Leads

This is why many, if not most, of the social platforms have become so toxic: toxicity sells. Pit one group against another, one psychological “clan” against a rival, and watch the post volume and advertiser revenue spike as a result. The financial incentives to push for division (i.e., divergence) are exceptionally high, and the social platforms follow this path with a vengeance.

Inasmuch as AI will be an extension of the data and analytics that serve these platforms, the likelihood that AI will lead to divergent, rather than convergent, results for our society appears rather high. This is disappointing, if not unexpected. It would be a mistake to believe this outcome is unavoidable, that AI must contribute more bad to the world than good. But the trend toward using it to cause more harm than good has definitely been established, and the ability to cause this sort of division and harm at scale with AI raises some chilling prospects. Let’s all hope it does not come to that.

The Crossroads of AI

So, what can we do to prevent the harmful use of AI? How do we ensure that AI is used for convergent, positive, predictable results rather than divergent, negative, chaotic ones? We collectively attempted to address this in the session, and while there are no definitive answers, there were some useful suggestions. First and foremost, as a society we need much greater awareness and transparency. The digital giants have been extremely reluctant to reveal what they do with these technologies, and recent disclosures seem to shed light on less-than-ethical uses of these tools. Greater transparency is necessary.

Second, we must achieve a greater degree of accountability for decisions made regarding the use of AI. If there are few to no negative consequences for using these tools in unethical ways, and massive financial incentives for being a bad actor, is it any wonder what the end result will be? In many jurisdictions there are laws that protect these companies from any culpability for their actions. Is it any surprise, then, that they act in ways that improve their profitability at the potential expense of societal cohesion? The social cost being paid is currently unknown, but given the astronomical market capitalization of these companies it must be exceedingly high; let’s not forget the old axiom that you don’t get something for nothing. If these companies are collectively worth ten or more trillion dollars, the social costs we have likely paid in allowing them to grow to this size are almost certainly of the same order of magnitude, if not greater.

Finally, it is imperative that we maintain a degree of human oversight and control over these technologies. True “General AI,” of the sort we see in science fiction, is highly unlikely to appear any time soon. And if we do achieve such self-aware AI, it is likely that by the time we discover we have achieved it, we will already have been enslaved by it. Such an AI would likely view many of our human traits as distasteful, if not downright repulsive. But we are what we are, and our human-ness is both our blessing and our curse. Regardless, it is imperative that we not allow these technologies to leave our full control, lest we lose the ability to decide for ourselves what is “Right” and what is “Wrong.”

While such a dystopian future may be highly unlikely, are we really willing to risk literally everything we have and everything we’ve achieved as a species by not being vigilant? The more potentially destructive a technology is, the greater the care with which we must protect ourselves from it. This is why you still can’t buy plutonium on Amazon.com, and likely never will. AI has the potential to be exceedingly dangerous to humanity, and as such, a high degree of caution should be applied to its use. I don’t see such an air of caution and humility among many in the AI community, and that is somewhat disturbing.

The AI genie has yet to be fully released from its bottle, but we are likely close to such a decanting. By the end of this decade I foresee most if not all organizations leveraging AI to some degree, if only to remain competitive. Because of this inevitability, I believe that it’s important that we have these discussions in the here and now, before we get very much further down the path towards wide-scale adoption of the technology. If we reasonably assess the level of social damage being caused by the various social platforms that are out there, we can make some reasonable estimates of the potential negatives that AI could cause if not properly controlled. These costs could be extraordinarily high. I hope that we as an industry of practitioners in this space take it upon ourselves to have these discussions more frequently, and more openly. My writing this article is an attempt to do just that.

At the end of his career, Einstein reflected upon his position regarding the Big Bang and an expanding universe, and stated that his belief in a static universe was one of the biggest blunders of his career. Let us hope that those of us who are proponents of the use of AI don’t have a similar lamentation in our future, regarding how we assessed the future of AI as being either convergent or divergent.


About the Author: Christopher Surdak

Christopher Surdak, J.D., is an industry-recognized expert in mobility, social media and analytics, big data, information security, regulatory compliance, artificial intelligence, and cloud computing, with over 25 years of experience. He is currently an Executive Partner at Gartner. He is the author of several books, including the upcoming Care and Feeding of Bots, a guide to the use of AI, Machine Learning, and Robotics in the business world. He can be reached at [email protected].