The rapid proliferation of large language models presents novel threats to privacy and human agency. Yet, given sufficient resources, AI models, and LLMs in particular, can do more than solve crucial problems in computing: they can become unprecedented allies in the fight against exploitation and violence.
Exploring the reverse- and re-engineering of language models to investigate, identify, and confront cruel, unethical, and criminal contexts and consequences of LLM use is not a task for developers and researchers alone. All of us are stakeholders in the success of AI development: artists, workers, programmers, and researchers across many domains.
At the same time, many LLMs will not be allies in this fight. Experts across domains must collaborate to advance and accelerate our understanding of LLMs, especially of prompting, in order to build robust, standardized, and privacy-protecting frameworks for investigating problematic uses of LLMs.
Here I will continuously document selected experiments with large language models.