The Hazards of Generative AI & Our Ethical Stance
What is Generative AI? 🤔 Let's make sure we're all clear on the definition. Generative AI is nothing more than a marketing term, and could loosely be described as a subset of machine learning. Machine learning is itself a valid part of computer science and has led to many useful applications. Generative AI, however, is a marked departure from the rigor of computer science: a set of algorithms and techniques (Large Language Models, or LLMs, being among the most common) used and abused by Big Tech to produce "simulated" content. That is, instead of using machine learning to classify, search, or 1:1 translate human-produced content (again, valid use cases for the most part), GenAI attempts to simulate a human mind in the production of novel content. This has been likened to a "stochastic parrot": programs which extrude, from massive training datasets, synthetic output that appears human-like to casual observers, yet in truth is ungrounded in epistemological reality and lacking any measure of veracity.
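The "stochastic parrot" idea can be made concrete with a deliberately tiny sketch: a first-order Markov chain that "generates" text purely by replaying word-adjacency statistics from its training data. This is nothing like how production LLMs are built (they are vastly larger and more sophisticated), but the epistemic gap is the same in kind: no grounding, no veracity, only statistics.

```ruby
# Toy "stochastic parrot": extrudes text by replaying word-to-word
# statistics from its training data. It has no concept of meaning;
# any apparent sense in the output is a statistical accident.

training_text = "the parrot repeats the words the parrot has heard"
words = training_text.split

# Build a table mapping each word to the words observed to follow it.
table = Hash.new { |h, k| h[k] = [] }
words.each_cons(2) { |a, b| table[a] << b }

# "Generate" by walking the table from a seed word.
rng = Random.new(42)
output = ["the"]
7.times do
  nexts = table[output.last]
  break if nexts.empty?
  output << nexts.sample(random: rng)
end

puts output.join(" ")
```

Every word it emits comes straight out of the training data; it can only ever recombine what it has already ingested.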
Now for our Generative AI ethical framework: our non-negotiable demands and our deep concerns. As I originally outlined in Episode 111 of my Fresh Fusion Podcast:
- Generative AI tools must be 100% open. The claims of keeping them closed for safety/security reasons are bullshit. If these tools are unsafe (and in many cases they are), they must be legally regulated, just like hard drugs, weapons, tobacco, child pornography, etc. Due to this requirement of openness, I must reject all corporate operation (and thus selling) of generative AI. Just like I don't pay corporations for access to the Ruby programming language, or the ability to edit JPEG images, or any number of other vital programming and data manipulation tasks, I don't understand why I would pay a particular corporation for access to generative AI (and even if it's somehow free to use, that single corporate source must still be accessed for the technology). Note: while some "open source" models are now generally available which claim reasonable utility, they regularly fall down on the next two points.
- Generative AI tools must be completely transparent about the sources they use and why. Black-box algorithms are unacceptable. Anyone claiming they don't yet know how to make algorithms that aren't black boxes is simply revealing those algorithms can't yet be used ethically. I'm aware there's ongoing research into backtracking from monolithic outputs to the variety of inputs involved, but as the years pass it becomes clear this is unlikely to be fully baked any time soon.
- The sources Generative AI tools use must be 100% opt-in. There can't be any of this "well, you should opt out after the fact if you're really worried about it". 🤨 All training datasets need to be 100% vetted, with all parties involved giving their consent and receiving reasonable compensation if indeed they wish to be compensated. It should be a matter of course that this opt-in consent is easily verifiable by third parties.
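To make "verifiable by third parties" concrete, here is a minimal sketch of what a machine-auditable consent manifest could look like. The schema and field names are hypothetical, invented purely for illustration; no such standard is named in this post.

```ruby
# Hypothetical consent manifest for a training dataset: every source
# declares explicit opt-in and compensation terms, so a third party
# can audit it mechanically rather than taking the vendor's word.
manifest = [
  { source: "alice-blog",     opted_in: true,  compensated: true  },
  { source: "bob-photos",     opted_in: true,  compensated: false }, # declined payment
  { source: "scraped-forum",  opted_in: false, compensated: false }
]

# A dataset passes only if *every* source explicitly opted in.
def fully_consented?(manifest)
  manifest.all? { |entry| entry[:opted_in] }
end

puts fully_consented?(manifest) # the scraped forum never opted in
```

The point isn't this particular schema; it's that opt-in consent is trivially checkable once it's recorded at all, which is exactly why "opt-out after the fact" is indefensible.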
- Generative AI tools should be "narrowly" purposeful. In other words, the general-purpose, all-knowing, all-seeing, magical prompt machines which can "vibe" virtually any output you might imagine (aka slop) are thoroughly unacceptable. Tools which can provide endless "novel" output are tools which are ultimately useless. This isn't anything like the reasoning capabilities of humans, or even the verified automation enabled by general-purpose computing. When it comes to AI algorithms, we need extremely targeted solutions if we are to trust anything coming out of them.
- Generative AI output should be tagged as AI-generated output, and it should be easy to trace how this output gets used throughout content pipelines. The notion that giant reams of text, or still imagery, or video, can be passed off as human-made, or comprehensively integrated into something eventually human-made, without any disclosure or possibility of verification is thoroughly unacceptable. Slop being promoted online without proper disclosure is DESTROYING the fabric of the Open Web. I am constantly second-guessing whether the art I'm looking at is actually real or not, and I've been burned more than once (thinking I'm following an artist, only to find out they're just churning out regurgitated AI imagery). Blog posts featuring AI-generated imagery are simply awful… I almost always leave the article behind, and I even unfollow people who do this habitually. Don't do that!
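As a sketch of what machine-readable tagging could look like: a tiny provenance record attached to a piece of content, with an explicit AI-generated flag and a content hash so the declaration can be checked downstream in a pipeline. The field names here are my own invention for illustration, not an implementation of any standard (real efforts in this space include the C2PA content-credentials work).

```ruby
require "json"
require "time"
require "digest"

# Hypothetical provenance record for a piece of content. The sha256
# binds the declaration to the exact bytes, so later pipeline stages
# can verify the content hasn't been swapped out from under the tag.
def provenance_record(content, generator:, ai_generated:)
  {
    "sha256"       => Digest::SHA256.hexdigest(content),
    "ai_generated" => ai_generated,
    "generator"    => generator,
    "declared_at"  => Time.now.utc.iso8601
  }
end

record = provenance_record("A sunset over the bay...",
                           generator: "example-image-model",
                           ai_generated: true)
puts JSON.pretty_generate(record)
```

A consumer further down the pipeline can re-hash the content it received and compare against `record["sha256"]` before trusting the `ai_generated` flag.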
- Generative AI tools should be opt-in for users as well. I reject software which adds generative AI to its feature set without the ability for me to opt out, much less opt in in the first place. Forcing me to have enabled access to these tools is deeply offensive. It's even worse when an employer requires the use of these tools as part of my job description. That would be as bonkers to me as saying you can only work at this job if you smoke, or drink alcohol, or carry a gun. That last one might make sense if, say, you're a police officer or in the military or maybe in private security, but otherwise it's thoroughly unacceptable.
- The environmental toll of Generative AI services has become alarming. These services, and massive data centers in general, are sucking up enormous resources in the form of electricity usage, water consumption, semiconductor production demands, e-waste over time, and other harms to local communities. This very real environmental cost has been likened to that of cryptocurrencies, which we already know are horrendously bad. While the models may be a bit less egregious on a case-by-case basis, the situation is certainly not ideal. Perhaps over time this issue will become less critical as silicon processing power becomes more efficient, but that day is far off.
- Deskilling and cognitive decline, even psychosis, have become headline news. I can't claim I saw this coming when I first sounded the alarm on the ethical hazards of Generative AI, but there is now copious evidence that people who regularly and credulously use these types of services exhibit cognitive decline over time. In some cases, those susceptible to emotional distress and mental health concerns are pushed to the brink of sanity, which has led to physical danger and even death. At the very least, slop proliferation is wrecking communities, centering fascist propaganda, stunting creative growth, short-circuiting career paths (especially for people new to their field), and generally wreaking havoc on the working class.
Just say no.
If, like me, you're excited about working with fellow humans on human-created projects intended for other humans, I would be delighted to have a conversation with you!
In addition to my professional work as a software developer and open source advocate, I write several newsletters & blogs in technology and creative spaces.
- That HTML Blog
- Fullstack Ruby
- The Internet Review
- Cycles Hyped No More
- Vibe Coded Podcast
- Fresh Fusion Podcast
And you can discover more about me personally, along with travel essays, photography, and music, by visiting jaredwhite.com