Political Philosophy and Ethics discussion

Both Political and Ethical Philosophy > Artificial Intelligence


message 1: by Alan, Founding Moderator and Author (new)

Alan Johnson (alanejohnson) | 5563 comments Mod
Recent advances in artificial intelligence (AI) have prompted wide public discussion of its propriety or impropriety and what, if anything, government should do about it.

I am not an expert on this subject. I am, however, currently aware of three major uses of AI:

1. For research purposes: I myself have used Microsoft Bing AI as a research tool and have found it highly effective, saving me hours of time that I otherwise would have had to spend locating publications on specific subjects.

2. To create essays, creative literature, term papers, etc.: I regard this as unethical if it is done to pass off AI writing as one’s own. I see this as being mostly a problem for high schools, colleges, and universities. I understand that these institutions are developing tools to deal with this problem, just as they have developed tools in the past for detecting plagiarism. More broadly, it falls within the policing of plagiarism in public media generally, including but not limited to copyright enforcement.

3. Military uses: see my post 262 (October 20, 2023) and my post 267 (October 31, 2023) in the “International Law and Politics” topic of this group.

For other discussions in this group on AI, do a search for “artificial intelligence” in the “search discussion posts” box located at the top right of each webpage in this group.

Please feel free to comment on these or other uses of AI or on your own views regarding this complicated subject.


message 2: by Alan, Founding Moderator and Author (new)

Alan Johnson (alanejohnson) | 5563 comments Mod
The Race to Regulate Artificial Intelligence: Why Europe Has an Edge Over America and China

The foregoing is the title of this June 27, 2023 Foreign Affairs article by Anu Bradford: https://www.foreignaffairs.com/united.... Bradford is a professor at Columbia Law School and the author of the forthcoming book Digital Empires: The Global Battle to Regulate Technology. (This Foreign Affairs article can be freely accessed, notwithstanding a subscription paywall, by agreeing to receive weekly/occasional emails from Foreign Affairs regarding their current articles.)

The third paragraph of this article summarizes its content:
With tech companies racing to advance artificial intelligence capabilities amid intense criticism and scrutiny, Washington is facing mounting pressure to craft AI regulation without quashing innovation. Different regulatory paradigms are already emerging in the United States, China, and Europe, rooted in distinct values and incentives. These different approaches will not only reshape domestic markets—but also increasingly guide the expansion of American, Chinese, and European digital empires, each advancing a competing vision for the global digital economy while attempting to expand its sphere of influence in the digital world.



message 3: by Alan, Founding Moderator and Author (new)

Alan Johnson (alanejohnson) | 5563 comments Mod
CONSCIOUSNESS, ARTIFICIAL INTELLIGENCE, REAL INTELLIGENCE, QUANTUM MIND, QUALIA, AND FREE WILL

I discuss the relevant work of biologist and complexity theorist Stuart A. Kauffman on pages 91–92 of my 2021 book Free Will and Human Life (a PDF replica of which is online at https://www.academia.edu/108171849/Al...).

Kauffman and an Italian scholar, Andrea Roli, have recently published a paper titled “What Is Consciousness? Artificial Intelligence, Real Intelligence, Quantum Mind and Qualia” in the Biological Journal of the Linnean Society, 2023, 139, 530–38. This essay is freely accessible at https://www.academia.edu/110296461/Wh.... I have quickly read through this article once, and I will study it further. Although it is a scientific paper, the authors write in clear language that is not impossible for the layperson to understand. Here are a few relevant excerpts:
This short paper makes four major claims: (i) artificial general intelligence is not possible; (ii) brain-mind is not purely classical; (iii) brain-mind must be partly quantum; (iv) qualia are experienced and arise with our collapse of the wave function. . . .

[F]or the first time since Newton, a Responsible Free Will is not ruled out. In the deterministic world of Newton, Free Will is impossible. Given quantum mechanics, the result of an actualization of measurement outcome is ontologically indeterminate, but fully random. I have Free Will but not Responsible Free Will. If I can try to alter the quantum outcome and succeed, responsible free will is not ruled out. This, if true, is transformative. . . .

With a responsible free will, we are indeed beyond Compatibilism . . . .

Moral: AI currently is wonderful, but syntactic and algorithmic. We are not merely syntactic and algorithmic. Mind is almost certainly quantum, and it is a plausible hypothesis that we collapse the wave function, and thereby perceive coordinated affordances as qualia and seize them by identifying, preferring, choosing and acting to do so. We, with our minds, play an active role in evolution. The complexity of mind and coordinated behaviours can have evolved, and diversified with and furthered, the complexity of life. At last, since Descartes lost his res cogitans, mind can act in the world.

Free at last.

Independent philosopher Robert Hanna, a member of this group, first introduced me to the work of Stuart Kauffman a few years ago, and I invite him to comment on the above-referenced paper.

I am cross-filing the present post in the “Free Will” and “Artificial Intelligence” topics of this Goodreads group.


message 4: by Alan, Founding Moderator and Author (new)

Alan Johnson (alanejohnson) | 5563 comments Mod
ARTIFICIAL INTELLIGENCE AND THE HUMAN MIND

I highly recommend the article titled “Subcontracting Our Minds” (https://www.academia.edu/125223176/Su...) by Timothy Burns, a political science professor who focuses on political philosophy.


message 5: by Alan, Founding Moderator and Author (last edited Aug 15, 2025 12:15PM) (new)

Alan Johnson (alanejohnson) | 5563 comments Mod
IS AI READY FOR PRIME TIME IN THE COURTROOM?

As a long-retired litigation lawyer, I have questioned for decades the assertion that lawyers can and will be replaced by computers (now AI). Per this August 15, 2025 article (https://apnews.com/article/australia-...), it looks like AI is not (yet?) ready for prime time in the courtroom. I'm sure it can provide assistance, just as Westlaw and LEXIS have provided computer-based legal research assistance for both lawyers and paralegals since the 1970s (per my AI search, which confirms my recollection).


message 6: by Feliks (new)

Feliks (dzerzhinsky) | 1736 comments Legislators in New York (and probably elsewhere) flirt with the notion of banning it, or of regulating it with warning flags, as with liquor and tobacco.

It's devastating arts & entertainment, despite the recent Hollywood strike where it was a specific bone of contention.

I can't help but wonder why the tech sector continually foists on us these 'advancements,' which we never asked for and suffered no penury doing without. Instead of a boon, it's just another hobgoblin to grapple with.


message 7: by Alan, Founding Moderator and Author (new)

Alan Johnson (alanejohnson) | 5563 comments Mod
Feliks wrote: "Legislators in New York (and probably elsewhere) flirt with the notion of banning it; or regulating it with warning flags like liquor and tobacco.

It's devastating arts & entertainment, despite th..."


"Generative AI" (having AI write one's papers, books, etc.) is, to my mind, a bad thing; I do all of my own writing without allowing AI to create it. However, I find AI helpful as a research tool. It often finds references that answer complex questions I have about history etc. It tells me in a few seconds what it used to take hours of my time to find in a library.

Like so many technological advances, it can be used for good purposes, or it can be abused.

I think banning AI is probably impossible, much as banning alcohol proved to be in the Prohibition era.


message 8: by Josu (new)

Josu Etxeberria (etxebe_22) | 4 comments Artificial intelligence should serve humanity as a whole, not steal our humanity. Poetry, philosophy, and art are, among other things, what make us reflect on ourselves and express our way of seeing the world. We should not allow automation to replace us in these endeavors; rather, we should direct it toward freeing us from the heavy labor that prevents us from developing in these disciplines. Yet the dark reality is that behind artificial intelligence lies an immense army of underpaid workers tasked with perfecting this automation.


message 9: by Feliks (last edited Aug 23, 2025 03:25PM) (new)

Feliks (dzerzhinsky) | 1736 comments [re: msg #7] Alan wrote: "Like so many technological advances, it can be used for good purposes, or it can be abused...."

It's true that technology often comes with "trade-offs" -- faster cars or quicker meal-times, for example -- in exchange for an unknown percentage of new mortalities each year.

Of course some technologies are so plainly, unmistakably bad that they have always been banned, such as nerve gas.

But I ask: why does the public never get the privilege of a referendum when any of these unknown risk factors are added to our lives? Why do ordinary folk never have input into development trends? Why is it always outside the democratic process?

Automobiles and highways were introduced to America in this way -- underhandedly. No one asked anyone beforehand whether we wanted travel revolutionized. Unscrupulous private firms carried out the transformation without any let or hindrance. They took it for granted that sales would vindicate their rash step. We weren't even asked afterwards. A fait accompli.

I'd like to see more such bans (as with nerve gas) return, since ever more such reckless technologies continue to abound. Yes, I admit it's unlikely.

Setting aside the Arts and treating just the main engines of our culture (law, gov't, science, engineering): it still seems to me highly reckless if professional or business documents were no longer prepared by professionals in these respective fields.

(Computer programmers are in no way licensed professionals of any kind whatsoever.)

For example -- if my freedom or my livelihood were being shaped in any way by a US court, I would scream like a panther if any part of the decision was authored by a programmer.

Thus, I'd be astonished if AI were ever given a footprint in the apparatus that actually runs our society. In the same way, I only trust licensed doctors to write my medical prescriptions.

At a minimum, algorithms introduce yet another reason for citizens to 'mistrust' language. The 'man in the American street' is already very inclined to such mistrust; adding a valid reason that bolsters his suspicions strikes me as foolhardy.

Widespread mistrust of the rule of law or of government can create dangerous political fissures.

So --"Let prejudice have its say", as it were --even though it emanates from an unabashed, "knee-jerk techno-phobe" (myself).


message 10: by Feliks (last edited Aug 23, 2025 07:25PM) (new)

Feliks (dzerzhinsky) | 1736 comments AEJ -- Alan -- I wonder if I might, in a cordial and friendly way, quiz you a little bit about your philosophy here in this thread. It would be a rare treat and also highlight the theme of your series of post-legal-career books (which I admire). Is it OK to ask you questions here that pertain to this?


message 11: by Alan, Founding Moderator and Author (new)

Alan Johnson (alanejohnson) | 5563 comments Mod
Feliks wrote: "AEJ --Alan --I wonder if I can cordially and friendly quiz you a little bit about your philosophy here in this thread. It would be a rare treat and also highlight the theme of your series of post-l..."

I am working on completing my final book, Reason and Human Government, which will take me another few weeks. And, really, everything you need to know about my philosophy is contained in those books, all of which are/will be freely available in PDF at https://chicago.academia.edu/AlanJohnson. (They can also be purchased on Amazon in Kindle and paperback editions; all of the Kindle editions are priced at $2.99 USD.) And drafts of the introduction, chapters 1–4, and the epilogue of Reason and Human Government are even posted at https://chicago.academia.edu/AlanJohnson, pending the completion of this book, after which I will post the entirety of it. I am currently working on chapter 5, which is the last chapter.

So, I don’t have time to engage in a dialogue about my philosophy with you right now. I would suggest that you read my books first (freely, if you wish, at the above-cited link) and then, if you have any questions, you can ask me. In any case, I will not be able to discuss this until after Reason and Human Government is completed and in print (sometime in September or October, 2025).


message 12: by Alan, Founding Moderator and Author (last edited Nov 04, 2025 08:16AM) (new)

Alan Johnson (alanejohnson) | 5563 comments Mod
AI and JOBS

AI is already having a significant negative impact on employment. This is likely to get worse in the near and far future. The following October 13, 2025 article in Foreign Affairs magazine examines this problem and discusses possible ways to deal with it: https://www.foreignaffairs.com/united.... The article also explains how populist actions against immigrants and trade will not do anything to solve this problem but will instead make it worse.

Quote from the article: "Virtually every piece of research suggests that restricting immigration and trade will not stop companies from adopting AI. In fact, it may hasten layoffs. Reducing trade, for example, will raise input costs, shrink export markets, and heighten policy uncertainty—pressures that make labor-saving technologies such as automation more attractive in exposed industries."

Nonsubscribers to Foreign Affairs can access the entire article, without cost, by giving the magazine their email address for notices of future articles.


message 13: by Josu (last edited Nov 07, 2025 04:53AM) (new)

Josu Etxeberria (etxebe_22) | 4 comments Alan wrote: "AI and JOBS

AI is already having a significant negative impact on employment. This is likely to get worse in the near and far future. The following October 13, 2025 article in Foreign Affairs maga..."


(Couldn't read the Foreign Affairs article)

AI is just the natural evolution of human technology. Ever since the invention of the wheel, the goal of innovation has been to optimize human labor, making it easier and more effective. Since the Industrial Revolution, this has led to high unemployment rates, as when textile machinery replaced manual workers. However, because our society has been built to produce not for humanity itself but for the sake of private profit, it creates new and unnecessary markets. And, paradoxically, human labor is always needed.

A peculiarity of AI is that it needs a huge amount of human labor to work. Not to mention that every piece of information that chat AIs provide comes from human-made sources. That is to say, for AI to work as a source of profit, companies need a whole external system of human labor, because abstract profit ultimately comes from wages. Here is the trick: a company owner makes a profit from their workers, but if those workers don't earn a salary, monetary profit makes no sense. Think about it: if you produce tons of, let's say, cars, your goal is to sell those cars. If nobody has a salary because everything is made by robots, you are producing literal waste unless you are willing to give those cars away for free. How do you even calculate prices if money loses its role as mediator between commodities and labor? Even the mere accumulation of money would be pointless in such a society of automated work. Not to mention that the main source of income of our beloved Western leaders (financial speculation) would become a huge waste of time if things stopped having a price.

I am not personally against automating certain kinds of labor. I think that agricultural labor, for example, could be hugely improved if we replaced humans with robots, drones, and the like. In my region, agricultural workers are usually illegal immigrants employed in dystopian labor conditions for ridiculously low pay. If work is often (if not every single time) alienating, because the worker, who like all of us labors for a salary, is not aware of how their work forms part of an abstract mechanism of global labor that shapes the whole of society in one way or another, then agricultural workers in semi-slavery conditions are even less aware that their job is literally feeding us. A robot doesn't need a salary and doesn't get emotional, which means that everything produced by a robot follows the same pattern and, with some human supervision, can be of higher quality.

I do believe that, by erasing the competition between companies and countries, humanity as a whole could organize society in a way that liberates humans from hard labor and lets them work on things chosen by vocation rather than by the urge for a salary. You can call me a utopian, but the fact is that automated work is leading us toward a huge contradiction between Labor and Capital. And despite the ongoing conflicts, humanity is connected enough to take a new step toward global pacification and production oriented to real human needs.

P.S. Marketing completely distorts human needs. The goal of marketing (which is the main organizer of today's capitalist society in every single social aspect) is to satisfy artificial needs. This leads us to create new (and absurd) markets that absorb human capacities and resources just for the profit of the genius (99% of the time, rich) behind the campaign. Have you ever seen a product in a shop and asked yourself: who the f*** buys this sh**? Through an efficient marketing campaign, probably somebody does. The truth is that we don't need many of the things we use on a daily basis; we use them only because we were convinced that we needed them. We don't even need 8,000 brands of the same product; coordinated production would be enough. But this is probably a new topic.

P.S. 2. Please don't judge me for my overuse of the words "literally" and "literal"; I am a young, non-academic, self-taught, non-native English speaker.

