AI Policy
This policy explains my position on the use of “Artificial Intelligence” in my professional work. I use the acronym “AI” to refer to the current wave of generative AI and large language models.
This is a first draft and is subject to change.
Last updated: 2025-06-18
TL;DR
I do not currently use AI for professional work.
Summary
I will not knowingly use AI to produce any of my professional output, including code, design, and correspondence. I intend to make all reasonable efforts to avoid interacting with AI in the work I am paid to produce.
Due to the pervasiveness and rapid deployment of AI, it is impractical to remain entirely unexposed. Whilst I abstain from using AI myself, I will continue to work with clients and colleagues who choose to use it. Where necessary I will integrate AI output provided by others, on the agreement that I am not held accountable for the combined work.
Rationale
This is a practical policy that allows me to maintain my own professional standards and remain employable in a difficult economy. I have organised my reasons for objecting to AI into four categories: Privacy and Security, Morals and Ethics, Quality and Standards, and Employment and Education.
This is not an exhaustive list; these topics address my primary concerns. This policy is not intended to be an academic essay.
Privacy and Security
Unfettered AI deployment has led to security vulnerabilities that leak private and confidential data. AI apps have exposed the personal data of unwitting users. Many AI tools log and report data with no easy opt-out or deletion path. Prompt injection is a common threat with no reliable prevention. “Vibe coding” has led to experts making basic mistakes.
Conclusion:
Using AI could too easily breach client trust and expectations, regardless of any confidentiality agreement. It also complicates compliance with UK and EU data protection laws.
Morals and Ethics
Copyright
AI models are trained on copyrighted work. AI companies have amassed giant datasets without respecting copyright licenses. No attribution is given to training sources.
Facebook was exposed for training on pirated books. Disney and Universal are suing for copyright infringement. The New York Times is suing OpenAI and Microsoft. Big media companies that are not going to court have instead struck licensing agreements.
If AI training is considered “fair use”, why are content deals being made? What value is there in copyright at all? Such logic would allow copyrighted work to be laundered through AI, effectively washing away any license.
Conclusion:
These unanswered questions mean I cannot use AI in good conscience whilst respecting my own creative work and legal rights.
Misuse and Harm
AI crawlers routinely scrape websites using underhanded tactics like residential proxies. The damage to open source infrastructure is overwhelming servers and maintainers alike. The number of known AI bots grows monthly. These bots blatantly ignore the long-established robots.txt industry standard. AI slop is spammed across social networks, creating a new threat of misinformation and scams.
AI data centres are draining water and electricity supplies. Water, noise, and air pollution from data centres is ruining the lives of nearby residents. AI model training has subjected contractors to “psychological trauma” for low pay.
Conclusion:
The AI industry is damaging the web platform I build for. The human and environmental toll of its pursuit of profit cannot be ignored. I am not willing to participate.
Quality and Standards
The quality of AI-generated code meets neither my professional standards nor the quality expected and required by my clients. AI will “hallucinate” APIs and syntax that do not exist. AI system prompts have been shown to favour specific technologies that are not appropriate for my work.
I have no reason to use AI for correspondence. I am confident in my own ability to communicate. AI would obscure and likely misinterpret my message. I would rather read the original prompt, containing only the relevant information, without the AI noise. I extend that courtesy to my recipients.
Generative AI for visual design is effectively a slot machine: ideation is reduced to a series of random spins. This abandons all established principles of good design.
Conclusion:
Delivering design and code generated by AI would be professional negligence.
Employment and Education
AI companies actively target students, who are turning to AI plagiarism at an alarming rate. Teachers are encouraged to use AI products. Studies already suggest that long-term LLM users suffer from “cognitive debt”.
AI CEOs openly warn that their own products will lead to job losses. Companies like Duolingo and Shopify mandate AI-first cultures despite failures like Klarna’s.
AI tools advertise the lack of professional skills required to use them. Nothing is said about the expertise required to adequately evaluate and correct AI output.
Conclusion:
AI deters new talent from entering the web industry and drives existing knowledge away, hastening the deskilling of web development. I do not wish to contribute to that.