LLM Absolutism?
How do you square the ethics of using an LLM? I'm wrestling with how to responsibly engage with this technology, but my unease with everything from environmental impacts to shady model training keeps me from feeling like I can engage responsibly.
The water use alone is enough to make me feel uneasy. At the same time, I live in a house powered by natural gas. I don't have alternative energy sources, so saying that I'm aware of the environmental costs falls a little flat when the rest of my life is equally consumptive in other areas. Does that make it okay to go ahead and use ChatGPT or similar because I'm not low-impact elsewhere?
I think the unease comes from the fact that using an LLM is optional while powering my home is not. I can see the value in using an LLM to brainstorm, as Simon Willison describes in a talk he gave in August 2023:
If you’ve ever struggled with naming anything in your life, language models are the solution to that problem. ... When you’re using it for these kinds of exercises always ask for 20 ideas—lots and lots of options. The first few will be garbage and obvious, but by the time you get to the end you’ll get something which might not be exactly what you need but will be the spark of inspiration that gets you there.
After reading, I tried this. My kids need to do a small inquiry project each year in school, so I opened ChatGPT and asked it for some ideas on inquiry projects a 5th grader could do on exercise. It actually gave me a couple of ideas that went beyond demonstrating proper stretching technique.
So, the potential for this kind of assistive work is more interesting to me. I know as a teacher that I'm supposed to be interested in the automatic YouTube quiz creator or the worksheet generators, but those are the lowest-hanging fruit, just above the whole "have AI give students feedback" mess that's starting to come out. I'm more curious about interactive LLMs as a rubber-ducking tool to help me think better, not as a way to offload cognitive effort that I should be engaging in personally.
And yet...I feel like using any of the available options makes me a willing accomplice in intellectual property theft. It seems clear these companies trained on copyrighted material in secret and only released their programs afterward because, had they disclosed their work, it wouldn't have been allowed on copyright grounds. Tech is doing what it wants and then using obscene amounts of money to deal with the legal issues after the fact. That's not okay.
I don't have any insight or answers - I'm mostly shouting into the void. I think I'm going to continue to read and think carefully about which technologies I choose to engage with and wrestle with my personal convictions along the way. Maybe as the technology improves, more models will be created that aren't as environmentally costly (working slower is always an option, you know) or as ethically shady as some of the big players are now.
And maybe that's the point - how we think about the issues as we come to decisions means more than the decision we end up making.
A thought from Seth
2024-03-11 12:33:12
This is also a dilemma that I have considered many times. The challenging thing about all of this is that model training takes up the vast majority of the resources. You will not be able to directly control (beyond consumption metrics) the training schedule of these models. However, you can manage the consumption side. You could attempt to minimize your footprint when using an LLM. An example would be to run a lower-footprint LLM on an RPi or something (https://github.com/garyexplains/examples/blob/master/how-to-run-llama-cpp-on-raspberry-pi.md). This will likely lead to worse results (though how much is variable), but you will be in full control of the power consumption and run time. Might be worth an experiment.
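For what it's worth, here's a minimal sketch of the kind of local setup Seth describes, assuming the llama-cpp-python bindings and a small quantized GGUF model already downloaded to the device; the model filename is a placeholder, not a specific recommendation:

```python
# A sketch of local, low-footprint inference via llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder for
# whatever small quantized GGUF model you download.
from llama_cpp import Llama

# Load a small quantized model. On a Raspberry Pi this will be slow,
# but inference stays on hardware whose power draw you control.
llm = Llama(model_path="models/tinyllama-1.1b.Q4_K_M.gguf", n_ctx=512)

# The same brainstorming use case from the post: ask for lots of options.
result = llm(
    "List 20 inquiry project ideas about exercise for a 5th grader:",
    max_tokens=400,
)
print(result["choices"][0]["text"])
```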
A thought from Sean Randall
2024-03-11 12:58:38
The LLM boom has transformed large parts of my life because I am blind. At home, I'd be unable to help with my daughter's math or geography homework; all the apps and quizzes are presented visually. At work I'd never know if my face was centred on camera, if the lighting in the room when filming was good, or if the cover images of my podcast episodes scaled well. And I'd miss out on so many memes and jokes on social media, family photos, and water-cooler chuckles because of things on the wall. All of this has changed because of LLMs. Originally I was just happy to have access to GPT-4, but I have become more environmentally aware and am now doing a lot more processing locally. Speed is rarely an issue, and I've modified my pipeline so I can choose to send an image off to the web if I want a quick reply, or keep it internal to my own network with a locally hosted model if not. I haven't yet managed to do this on my phone; there's an app for the blind called Be My Eyes which integrates GPT. I'd like to use our network's Pi-hole to parse those requests out and return local data if possible, but that's a big project. But I just wanted to chime in here with a brief comment from the disability perspective. :)