> An overall failed attempt at using AI
> I attempted to use AI to try out the process, mostly because 1) the industry won't shut up about AI, and 2) I wanted a grounded opinion of it for novel projects, so I have a concrete and personal reference point when talking about it in the wild. At the end of the day, this is still a hobbyist project, so AI really isn't the point! But still...
> I believe in disclosing all attempts or actual uses of generative AI output, because I think it's unethical to deceive people about the process of your work. Not doing so undermines trust, and amounts to disinformation or plagiarism. Disclosure also invites people who have disagreements to engage with the work, which they should be able to. I'm open to feedback, btw.
Thank you for your honesty! Also tremendous project.
The funny thing is the phrasing used to be more neutral, but I changed the tone to be slightly more skeptical because people thought I was just glazing AI in my post. Another guy on Reddit seemed annoyed that I didn't love AI enough.
I just wanted to document the process for this type of project. shrug
It seems to me that AI is mostly optimized for tricking suits into thinking they don't need people to do actual work. If I hear "you're absolutely right!" one more time my eyes might roll all the way back into my head.
Still, even though they suck at producing specific artifacts or copy, I've had success asking an LLM to poke holes in my documentation: things that need concrete examples, knowledge assumptions I didn't realize I was making, that sort of thing.
I dunno about the need for disclosure in this way. In my working life I've copied a lot of code from Stack Overflow, or a forum or something, when I've been stuck. I understood it (or at least tried to) when implementing it, but I didn't technically write it. It was never a problem, though, because everybody did this to some degree, and no one would demand that others disclose such a thing, at least in hobby projects or low-stakes professional work (obviously it's different if you're making something mission-critical, like autopilot software for a passenger plane).
If it's the norm to use LLMs, which I honestly believe is the case now or will be very soon, why disclose the obvious? I'd do it the other way around: if you made it by hand, disclose that it was entirely handmade, without any AI or Stack Overflow or anything, and we can treat it with respect and ooh and ahh accordingly. Otherwise it's totally reasonable to assume LLM usage. At the end of the day the developer is still responsible for the final result and how it functions, just like a company is responsible for its products even if it contracted out their development, or how a filmmaker is responsible for how a scene looks even if they used Adobe After Effects to content-aware remove an object.
I disclosed AI because I think it's important to disclose it. I also take pride in the process. Mind you, I also cite Stack Overflow answers in my code when I use them, usually with a comment like:
// Source: https://stackoverflow.com/q/11828270
For any AI code I use, I've adopted this style (at least for now):
// Note: This was generated by Claude 4.5 Sonnet (AI).
// Prompt: Do something real cool.
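To show how that convention reads in context, here's a hypothetical sketch: the attribution comments are the ones from the post, but the function bodies are invented for illustration and are not claimed to come from the linked question or from any model output.

```javascript
// Illustrative only: the bodies below are made up; only the
// attribution-comment style is the point.

// Source: https://stackoverflow.com/q/11828270
function copiedHelper(xs) {
  // (hypothetical body, not from the linked thread)
  return xs.filter(Boolean);
}

// Note: This was generated by Claude 4.5 Sonnet (AI).
// Prompt: Do something real cool.
function generatedHelper(n) {
  // (hypothetical body, stands in for model output)
  return n * 2;
}
```

Keeping the prompt alongside the model name makes the provenance auditable later, the same way the URL does for a copied answer.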