But I think we all have a responsibility to make these ethical decisions. If you are adding to the AI user base, feeding it your data and training it and pumping its figures, there are a variety of legit reasons to do so, from curiosity to convenience to FOMO. And if you don’t care about the environment or wider impact of your actions, alright, rock on. We can’t all care about everything all the time.
AI doesn’t have to be a corporate nightmare. It can be self-hosted, or hosted by a third party that doesn’t log or train on anything (which is extremely common).
And if you’re concerned about the relatively marginal energy use of inference, use Cerebras or Groq (not Grok) or Huawei to get away from Nvidia and the crazy clocks they run their chips at. Or even just something like GLM or DeepSeek, which are deployed with quite reasonable efficiency, on peanuts for hardware.
The fundamental issue is corporate enshittification, basically. And that the vast majority of folks, unfortunately, take the shitty Big Tech options because they’re the loudest, and no one knows there are better, cheaper, and nicer options.