AI: Predatory Practices Disguised as the Future of Technology 

By: Samuel Geronimo

In theory, a tool that can summarize data, generate images, and do much more at remarkable speed sounds like an excellent aid for research. In an ideal world, where we essentially have the world at our fingertips and are privy to almost any information we want at a moment’s notice, you’d think tools such as AI could help us further connect with one another and help mankind. However, this is not an ideal world. At the forefront of any useful tool is exploitation, and with exploitation comes behavior unbecoming of a decent human being from those being exploited. You cannot build systems that feed on engagement, among other prosumer-friendly principles, and expect users to act with decorum on these platforms. Put simply, if you play in s***, you will smell like s***. 

Recently, there was a disturbing trend in the AI community. Civil rights leader Martin Luther King Jr. was being depicted on the Maury show spewing profanities, mocking his famous “I Have A Dream” speech, wearing a MAGA hat, sporting flashy jewelry, and acting like the stereotypical Black man many envision in 2025. CBS News had this to say: “OpenAI is temporarily blocking users of its Sora 2 AI video app from making content that includes Martin Luther King Jr.’s likeness after some people created what the technology company called ‘disrespectful depictions’ of the civil rights activist.” “Disrespectful depictions” is the most lighthearted way to frame what is happening. We are seeing free will exercised at levels ignorant enough to leave even the most brain-rotted individuals gobsmacked. This is significant because it is just a microcosm of the predatory practices that AI systems draw their users into.

It seems like nobody is safe from becoming a capitalistic concubine of AI, not even children. In a Futurism article titled “AI Is Being Trained on Images of Real Kids Without Consent,” Maggie Harrison Dupre reported: “‘The technology is developed in such a way that any child who has any photo or video of themselves online is now at risk,’ Han continued, ‘because any malicious actor could take that photo, and then use these tools to manipulate them however they want.’ It’s also worth noting that many of the images discovered were sourced from web content that few folks online would ever stumble across, like personal blog posts or, per Wired, stills from YouTube videos with extremely low view counts. In other words, AI is being trained on content that wasn’t necessarily designed for mass public dissemination.” We must ask ourselves as a society when enough is enough. If evidence of the exploitation of little kids isn’t enough to stop these egregious empires, what will ever constitute enough? Will the black hole just keep growing? This is important because many AI users contribute heavy engagement to these platforms without realizing the real-life implications that something as simple as watching a Sora-generated video might have. Namely, because AI draws on real people in its databases to generate the images and videos users request, you or your loved ones could very well be inside those algorithms, generating a video for someone else. These practices have contributed to the parasocial ecosystem that exists today. 

The need for greed associated with AI has had repulsive ramifications on our actual ecosystem. Surely we’ve all heard that AI use has had an impact on water, of all things. According to “Data Centers and Water Consumption”: “Data center developers are increasingly tapping into freshwater resources to quench the thirst of data centers, which is putting nearby communities at risk. Large data centers can consume up to 5 million gallons per day, equivalent to the water use of a town populated by 10,000 to 50,000 people. With larger and new AI-focused data centers, water consumption is increasing alongside energy usage and carbon emissions.” As if this weren’t enough, Elon Musk’s “supercomputer” project in Memphis exemplifies an issue that often arises when catastrophic levels of capitalism are disguised as progress and symbols of hope. Musk’s new facility is a prime example of environmental racism. In an article from the Tennessee Lookout titled “A billionaire, an AI supercomputer, toxic emissions and a Memphis community that did nothing wrong,” Ren Brabenec reported: “‘It’s no coincidence that if you are African American in this country, you’re 75% more likely to live near a toxic hazardous waste facility,’ said state Rep. Justin J. Pearson, a Memphis Democrat, in a recent interview. ‘It’s no accident that in this community, we’re four times more likely to have cancer in our bodies. It’s no accident that in this community, there are over 17 Toxics Release Inventory facilities surrounding us — now 18 with Elon Musk’s xAI plant.’”

Despite vocal opposition from South Memphis residents and their defenders, these neighborhoods are beginning to look like “sacrifice zones”: poor, predominantly Black communities that are willfully poisoned and polluted for the interests of power and wealth. If you don’t live in Memphis, this may not strike you as important; however, how long until more of these facilities start going up in places like Staten Island, Newark, Chicago, Los Angeles, and Atlanta? Will you care when you or a loved one falls victim to these vulturine vicissitudes? In an article from Politico titled “‘How come I can’t breathe?’: Musk’s data company draws a backlash in Memphis,” Ariel Wittenberg had this direct quote from a Memphis resident: “I can’t breathe at home, it smells like gas outside,” Boxtown resident Alexis Humphreys said through tears, holding up her asthma inhaler during a public hearing about the turbines on April 25. “How come I can’t breathe at home, and y’all get to breathe at home?” 

I would be remiss not to mention the impact of AI use on the critical thinking of those who use it. An unprecedented repercussion seems to be a halt to thinking as a whole: an almost complete shutdown of the brain. I spoke with College of Staten Island professor Emma Johnson, who had this to say: “I’ve gotten emails that were clearly not written by a student, sometimes things like asking for an extension.” Have we strayed so far from real life that we use AI to write our emails for us? At the risk of sounding critical of my peers, I’d like to acknowledge that I have heard of companies using AI to mock up emails, among other menial tasks. However, again at the risk of sounding critical of my peers, I’d like to point out that these students, subletting part of their brains to AI, are not running Fortune 500 companies. Professor Johnson goes on to say this about AI use: “We don’t go to college because it’s easy, right? You go to grow and to learn. So I am concerned that it could be, basically, holding people back from reaching their own potential and from learning. That’s my concern for individual students. I have another concern, which we’ve talked about before, which is this distrust that I think is interjecting between professors and their students, where we’re constantly distrustful now of any written communication that we have.” On that note of distrust, I recall a time last semester when I submitted an assignment for Professor Johnson and she was concerned I had used AI in my work. I didn’t know whether to feel flattered or offended. 

Fear of missing out is at the core of most AI use. No one wants to feel left out, many conglomerates and business tycoons included. “AI washing” refers to deceptive marketing tactics that include, but are not limited to, promoting a product or service by overstating the role of artificial intelligence in it. Recent studies have shown that mere mentions of artificial intelligence by a company can raise its stock price instantaneously. For the consumer, this means that although companies brand themselves as “implementing AI,” more often than not the use is something as simple as writing up emails or setting reminders. Companies are not using AI to do anything groundbreaking. Professor Johnson also shared her opinion on AI washing. She had this to say: “I have some skepticism that we may be in a bit of an AI bubble where people have invested a lot of money in the system and they’re dying to find a way to use it.” 

Change is inevitable, and the most important skill in life is adaptability. Be that as it may, there is a fine line between adapting and conforming to norms set by those who aspire to become trillionaires. I have no doubt AI use will continue; in fact, it may even rise. A word to the wise: if you must use AI, use AI, don’t let AI use you. Be cognizant of the ramifications of using systems that are, in large part, causing much more harm than good. Systems that feed off nothing more than engagement, not enrichment. Systems that feed off scalping the critical thinking skills of their users. The next time you see an AI-generated video on social media, keep in mind that merely interacting with that content allows AI leaders to colonize our communities and our minds.
