Debating AI

Dear lord... a media personality steeped in Kings basketball should not need ChatGPT to generate a rebuild plan. 😖
All it does is scour the web for the most repeated answers and push out the results in conversational/essay format. I don't get why anyone ever says "I asked ChatGPT/Grok" about anything... but yeah, I definitely think Dave could have either fan-sourced this and summarized it himself or given his personal plan.
 
Come on, a LITTLE credit.

I know what MY plan would be, I’ve been saying it very publicly for a while now.

The fun of the exercise was to see just how aligned (mostly) the computer was. There were some errors (overvaluing the number of picks we’d get, for example), but the philosophy was pretty spot on.

It also included a sample press conference from the GM and talking points to share with fans.

Just an exercise for what is surely going to be a long and arduous season. Not that serious.

Some of us who are writers don't find the idea of generative AI very funny. These companies are literally stealing people's life's work to train their language models and then playing dumb about it when it comes to compensation. And for those who are not writers: not to worry, they're also stealing your resources to power their data centers while charging you for the privilege.

It's happening and will continue happening regardless of what I think about it, but nothing about this business model is harmless, and I really wish more people realized that. And if you aren't terrified by the idea of private companies telling you "relax and let us do your thinking for you" because you're already educated and capable of performing your own research and critical analysis, then consider the fact that they're in every school in America now, the same way Coke and Pepsi were when I was a kid.
 

Most of what I would offer in response was already offered by @hrdboild. But I'll expand slightly.

These technologies are glorified plagiarism machines. As a writer, I've had my work stolen before, and it's a miserable feeling to realize that someone else thought they could get away with passing off my ideas as their own. What we refer to as "AI" is no less insidious. In fact, I'd argue that it's more insidious, since it pretty well covers its tracks. It's vacuumed up copyrighted works and non-copyrighted works alike, and the average user simply does not fathom the scale of theft (nor the environmental cost) of engaging an "AI" like ChatGPT.

More to the point, these companies have argued before legislative bodies around the world that their "AI" models would not be financially viable if they had to ask permission of every creative individual and license use of every creative work in order to "train" their supposedly transformative "AI" technologies. If your business model cannot be supported without theft, then it's no business model at all.

I know I'm swimming upstream with this, and most everyone wants to tell me that "this is the future" and "there's no stopping it" and that I should roll over and just let it happen. But I do not countenance the outsourcing of one's thinking to the machine. Ever. Not for work, not for play, nor for sh*ts, nor for giggles. Especially when it's such a morally bankrupt enterprise in every sense imaginable. OpenAI and companies like it should be launched into the sun with nary a tear shed for their obliteration.
 
I agree with the points on AI but I also think we're being the fun police as it pertains to Dave.
I think media has a responsibility to use it more judiciously than anyone on this forum. I had my "a-ha" moment when I stopped using it, and I will admit there are a few very rare instances where it may be the only option, but using it as a Google replacement or to write a think-piece just means more and more people will normalize it and fall back on it when critical thinking is required. The fact that almost any "how-to" or even requests for historical context are often riddled with errors is extremely alarming, beyond all the plagiarism aspects.

Considering what I am capable of generating at home, offline, on a basic M-series Mac these days, I am not sure whether some of the "thanking Grok takes more energy than charging 5 Cybertrucks!" claims are exaggerations, but collectively the wasted resources are not something we should ignore.
 

Hard to make the logical leap from "yeah, I agree with the points on the insidiousness of AI" to "eh, it's just a bit of fun". OpenAI wants users to "have fun" with ChatGPT. They want a culture to form around their technology where users see it as harmless in the same way that the production of memes is considered harmless. If millions are using ChatGPT for seemingly innocuous purposes, and if it becomes integrated into their daily experience of "being online", it's easier for OpenAI to justify the existence of a technology that was built on grand larceny.

Silicon Valley investors are absolutely dumping money into "AI" tech. OpenAI and others are operating at extraordinary losses. These firms are desperately hoping that they can find money-making use cases for this technology so that they can deliver on these investments. The American economy is mostly being propped up by AI speculation right now, and if that bubble bursts because the courts deem these activities illegal or the users determine that these technologies are not worth the resources being thrown at them (while they continue to watch their energy bills rise dramatically), it's going to have some nasty downstream economic effects. Despite all of their bullsh*ttery, Sam Altman and the CEO class aren't going to feel it. But the American people will.

It's easy enough to adopt an "it's all fun and games" kind of posture, because nobody enjoys being the scold telling the neighbors at the party next door to keep it down. But again, the average user doesn't recognize the theft at work behind their every "AI" query, and every industry is rushing headlong into "Web 3.0" for fear of missing out, so much so that there's a shocking lack of education and common sense around these technologies, how they function, what their limitations are, the infringements that led to their initial development, and what their continued development represents to the future of entry level jobs, the economic health of the creative class, and the outsourcing of critical thinking.
 
This AI tangent is interesting, and perhaps we can spin it off in another thread, but for the time being:

I can tell you how pervasive it is in education. While there is pushback from educators at large, there is concern among admin and families about "being left behind", which is causing schools to adopt it wholesale without thinking through the logical/ethical reasoning behind it. The amount of resources wasted to create a video of a cat freaking out while its owner cuts a cat-shaped cake open is absurd, and something we should not be partaking in, but, without thought to consequence, here we are.

Amusingly enough, in my current role, I will be attending a symposium on AI in a few weeks, so I'll be able to get a better read on the general feeling across the country, at least as it pertains to academia. I don't think it'll be positive, but at the same time, private industries are wooing governments for resources to keep the lights on, so barring some sort of worldwide catastrophic AI-induced event (like a world war waged for water, or some Skynet-type thing), I think it's here to stay.

Relatedly, there were a lot of people concerned about the adoption of technology at early ages, and while there are some benefits, I think we're seeing some academic drawbacks: increases in dysgraphia, equity gaps, overstimulation, etc.

All that is to say, I think it's "neat" to see that AI can come up with a plan to save the Kings, but that's about it. If CD wants to use it to see how his ideas fall in line with a machine's general assessment, I don't really care. If he or other media personalities are presenting AI's thoughts as their own, then it's plagiarism: the ideas of many presented as their own thought.

We've seen clickbait articles presented this way, and thankfully we're able to cast them aside. For now.
 

As a fellow educator, I can sympathize. In the past three years, I've attended no fewer than four conferences and served on no fewer than three committees related to AI deployment in the classroom and its effects on student learning, and the basic takeaway is: this is bad for educators and bad for students, but higher-ed administrators are going to adopt it anyway for fear of "being left behind", so we're all left to deal with the fallout, with little in the way of administrative support for the consequences of inviting a nascent, immature technology into a vulnerable educational environment.
 
Honestly, society fell for the snake-oil tactics of "AI" companies just a bit too hard in our current witch-hunting era, and now we collectively resent "AI" more than we should.

Despite all the hype, this isn’t Artificial Intelligence at all. It’s essentially an advanced search engine with customizable parameters, not a thinking machine.
 
Your last point is valid in the sense of how AI seems to be used when not creating anything artistic: Google automatically uses AI during its searches, and seems to curate a reasonably accurate list of sites to choose from under the guise of "answering a question."

I think what I find most problematic (from a creation standpoint) is its use for creative work related to the arts: things that we take for granted as being "human" endeavors. Writing songs, painting, creating a story. If we offload creative thought as well as rational thought, what's left?
 
1. I write as well.
2. The toothpaste is out of the tube with AI. This is the “you won’t always have a calculator in your pocket” convo of 2025. We adapt or we become obsolete.
3. I have written, spoken, and used every form of communication possible to outline MY rebuilding ideas. This was an exercise to see how closely the AI would agree or disagree. But ultimately not that serious.
4. I have not seen this exercise “reported as fact” elsewhere, and would ask for a link to an example. Not my conversation last Tuesday where I outlined on my show that the Kings were entering a rebuild phase. That WAS reported, and had nothing to do with AI.

Interesting convo
 
I'm not sure if this is what you're asking for, but here's one:


After another quick poke, I found this:


I guess my point is that it DOES exist, and how are we as society going to handle it? Do we use AI to fact-check AI? To what end?
 
I know Stanford has built its own ChatGPT-style system, which goes to show you that at least one of our top universities does not want any of its work going to these tech companies.

As an educator, I think acceptable uses would be creating a quiz or creating a fictitious character background for the patient in one of my medical cases. Both of these would still require a content expert to review for appropriateness and accuracy, but it is a solution for busy work that doesn't compromise the actual learning objective or the exercise itself. I think using it to submit or grade work is highly problematic, and yet both are constant.

Using it to check grammar is something I thought was harmless, but you're just giving it your work. I would give it an abstract for a poster or short presentation I was working on and ask it for a catchy title. Since that stuff ultimately got published, it's not a huge deal, because they might get it anyway, but learning that Stanford and others have their own LLM systems was a huge a-ha that maybe I won't do this in the future.

As for journalism, though, it is becoming rampant. Just like staff writers were replaced by newswires and freelancers paid on piece rate, those freelancers are now being farmed out to AI. And it is not fact-checked, so we are seeing lots of made-up information repeated as fact when these pieces go viral. I actually think it is even more harmful when it is disclosed, because it is normalizing it as a replacement for real journalism and research.
 
My extent of using AI/ChatGPT has ranged from asking the platform to help brainstorm ideas for potential writing exercises (i.e. a novel or a series of novels) to asking for ideas on how to enhance an Excel spreadsheet/workbook.

I've always had somewhat of an interest in writing, but have never been able to get past "writer's block". I've used the platform to help me formulate ideas, characters, plot, symbolism, etc. I've also asked it to provide sample chapters and/or paragraphs. But just to see what it produces. Honestly, I've been rather surprised at how detailed it can go, and how much it can produce. But, of course, I'd never use it to ACTUALLY write a full-length story/novel. But, if nothing else, if it can provide ideas in order to guide me in the right direction, I don't mind using it. Also, when I say "help me formulate ideas", I am not talking about original ideas, but rather ideas that I already have in my mind that I need a little bit of help putting down on paper.

The Excel/spreadsheet enhancements are primarily formula-driven (i.e. what kind of formulas can I use to accomplish what I would like my spreadsheet/workbook to accomplish)...

Nothing too crazy (honestly). And it definitely is not a platform I would ever use to produce material(s) to be used in the real world.
 
GPT was used extensively in my son's computer science class. As someone who can break down code and massage it to do what I want, I wish it had been around when I was writing Python and shell scripts for image processing and file storage.

I am pretty OK with Excel but do wonder if it could help me write some advanced formulas from tab to tab. I'm going to have to try it sometime, as I put together some pretty complex spreadsheets where I put about 30 variables in that get distributed to 12 tabs. Hmmm.
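For what it's worth, the kind of cross-tab formula I have in mind looks something like this (the sheet and column names here are made up purely for illustration):

```
Pull the value matching the key in A2 from a hypothetical "Inputs" tab:
=INDEX(Inputs!$B:$B, MATCH($A2, Inputs!$A:$A, 0))

Total a column on another hypothetical tab, filtered by the same key:
=SUMIFS('Tab 3'!$C:$C, 'Tab 3'!$A:$A, $A2)
```

From what I've seen, chatbots can usually produce this kind of boilerplate, though anything generated would still need checking against the actual workbook.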
 

This I think is the most appropriate use for AI -- have it handle the busy work of stress-testing your code. Programmers are well equipped to build tools which save themselves time, and if that means there are fewer programming jobs out there and the level of efficiency for that smaller pool of programmers has been elevated, fine. There are other jobs which need doing.

...

I mostly resent what is happening on the creative end, as @Spike pointed out. We've already seen tech companies usurp control of the music industry from the actual artists to the point where it is no longer a viable profession for 99% of the population. Making music is a part-time hobby only now, unless you're the rare one-in-a-million artist able to coalesce your talent, image, message, and self-promotion skills all into one marketable product (yourself). And even if you manage to reach that threshold, you're competing with a larger and larger pool of AI-generated nonsense. That's point one. Had I known that downloading music for free in high school was ultimately going to lead us here, I would hope that I would have exercised more restraint and tried to think long-term.

Point two is that AI is also in the process of dismantling film production as a viable profession for most people. For a long time, aspiring filmmakers got their start in advertising or (after the birth of MTV) music videos, because that's where all of the money was. You could make a living working for advertisers long enough to hone your skills and then attempt your moon shot to become an actual narrative filmmaker. Or you could go into documentaries and industrial training videos if your interests lean more non-fiction. In short order, very few of these jobs will exist, because businesses don't particularly like spending money when they don't have to, and the tools are "good enough" to slap your brand name on and push into the marketplace at a time when just about every business is looking to cut expenses. Fewer actual productions means fewer full-time technicians are able to survive in their current roles, which is going to drastically reduce the talent pool. Film studios are already rapidly being bought up by tech companies eager to expand their reach into "cultural consensus building". Where once we had art, we will instead have corporate propaganda masquerading as such.

For long-form writing -- it takes time to develop a sense of style, and it also takes courage to constantly grapple with the empty page. If we normalize AI-generated writing, that time goes away. Entry-level jobs for people who have advanced language skills are being eaten up faster than anything else, because the people building these tools think in terms of balance sheets and have no respect for language as an art form. As a result, would-be writers will put most of their time into working day jobs as the treadmill of inflation continues to make the idea of "taking a break to write" impractical for anyone who can't count on the safety net of inherited wealth, and that rich tapestry of human experience is going to get flattened.

And I truly believe that any writer who uses these tools as part of their creative process is going to shortcut themselves right out of developing their own voice. The calculator analogy doesn't hold up for me, because the end goal of a math problem is to get the right answer; any tool which gets you there faster is not impeding the goal. There is no "right" answer at the end of a writing project; there is only whatever spirit you can imbue the page with. Whatever you produce with generative AI as a crutch is still going to come from your own mental effort, but my experience has been that when the level of effort is lowered, so too is the plateau of what you can achieve.

I don't know that basketball analysis is at the level of art, but it certainly overlaps with it as a blending of applied data and intuition focused toward the communication of an idea. A lot of the content produced by these AI tools is laughable, in that it is clearly the work of a synthesizing model which looks for and repeats patterns but does not have the faintest idea what any of this means. But like the pirated music and movies of the previous two decades, there is a very real chance here that we've brought in the Trojan horse which will make everything a lot worse, and by the time we realize it, the economic model will already be damaged beyond repair. How big of a leap is it for an AI company to claim copyright of anything that you've created with their tools? Right now we're only at step one of the Walmart-ification of art: undercut the market to force your competition out of business. Once they have squeezed everything non-AI out of the marketplace, the sky is the limit for the level of draconian control that can be imposed. Real art will continue to be made, of course, provided you are committed to a life of struggle and marginalization.

So ultimately the only way out of this death spiral, as I see it, is to tell these tech companies that we don't want what they have to sell us -- not their subscription-based music services nor their bloated social media apps, not their algorithmically sourced streaming video content nor their generative AI tools -- and to set up alternative platforms marketing human-created content to humans who want it.
 
I think music was always a losing game. There are a few Taylor Swifts out there, but even she had horrible deals on her first albums, to the point that she resorted to suing for 100% of her publishing and then re-recording everything.

Film is interesting, when I was in the industry we laughed at video game workers because film (especially a flagship studio like the one I was at) paid great and once you were in you could always find work and studio hop. Eventually studios banded together and put a stop to it, signed a secret no-poach agreement, and killed all the fun and profit for workers. Now my friends that stuck around all work for ... video game companies. lol

My two biggest concerns are simply intellectual theft and the spread of misinformation. It was bad enough when it delivered convincingly authoritative printed work that was factually incorrect; now that you can type in a prompt and get 1987 Running Man-style convincingly real video showing people doing horrendous things, what are we even doing here, people?

AI tools that don't do those things are fine by me. Something that will listen to my music track and a reference track and try to balance and master it similarly? Great. Photoshop tools that remove obstructions and replace them with what they think is behind them? Awesome. There's so much valid use, but also so much nefarious use going on, which often gets by with using fun and games to trick users into turning over all sorts of data to these companies to scrub through. Hey, this app will make me into a cartoon, sure it can use my entire photo library... are we sure they didn't just download 10 years of your photos that they'll train on for god knows what? Probably not for making cute office memes.
 

And ironically enough, even the Taylor Swift fans don't want to listen to Taylor Swift anymore after her latest album!

You make a lot of good points here. Economic models are always going to change, so I suppose the goal is to make sure they're changing in a way that serves all of us, not just the few at the top. We need support for that cause in at least one branch of government to make that happen.

And I also don't see a problem with tools which help to streamline tedious tasks that often get in the way of creativity. I don't want an AI model editing my movie, but if it can sort all of the clips into bins by metadata and put them in a chronological timeline so that I can quickly search by scene and review them take by take, that allows me to get to the work of being creative faster, so I'm all for that. I don't hate any imposition of technology on the creative process -- many of the mediums we work in today wouldn't even exist without technological progress, after all. It all just comes down to how it's used and who is in control of it.
 
Some good discussion going on in here. I do wonder how many countries in the world are implementing this in their education systems right now?
 
And ironically enough, even the Taylor Swift fans don't want to listen to Taylor Swift anymore after her latest album!
I never thought that timeline would exist and yet here it is.

It's funny that the people pushing AI the hardest keep claiming we'll never have to work after our robot overlords take care of everything, and that instead we'll all get paid a basic income, but I think we all know how that would work. I imagine your "basic income" would be a function of how many robots you can afford.
 