Life of an Author
Many writers fear AI. Some embrace it. As with any new technology, how you relate to it matters.
March of the Machines. Canadian novelist Stephen Marche recently published Death of an Author under the AI-generated pseudonym Aidan Marchine. The novel was “95% AI-generated.” As Wired says of this “collaboration,”
“Not only is it an exciting read, it’s clearly the product of an astute author and a machine with the equivalent of a million PhDs in genre fiction. ChatGPT read basically the entire internet and all of literature, finding billions of parameters that go into ‘good’ writing.”
Combining ChatGPT with Sudowrite and Cohere, Marche refined his writing—or in his estimation, curation—process over the course of months.
Marche shares his insights in The Atlantic, where he tells readers that if you want to write like an author, don’t prompt the author’s name or else you risk producing derivative text. Instead, prompt what that author provokes in your mind.
As to the question of AI’s danger to writers, he reminds us that, at the end of the day, it’s just another tool in the creative toolkit. An extremely effective and wide-ranging one, but a tool nonetheless:
“If you take a hammer and hit yourself over the head with it, the hammer did not give you a headache. If you make bad art with a new tool, you just haven’t figured out how to use the tool yet. Also, tools are just tools: Everyone has access to a thesaurus; some people have richer vocabularies than others nevertheless. Linguistic AI is no messiah, and it is no anti-Christ. It is a fundamentally mysterious tool whose confounding inabilities will be as surprising as its wondrous capabilities.”
Playing Fair
Copyright issues are top of mind when it comes to AI. A big question has emerged regarding creative endeavors: do copyright holders need to give permission for their work to be used to train AI models?
Fair precedent. While this complex and nuanced argument will likely be made in courtrooms, the Fair Use precedent is a heuristic many companies employ: there’s no meaningful difference between AI tools and other computational uses that have already received court blessings over the last few decades.
Over at Freethink, the argument is that the public good is the ultimate decider in copyright disputes.
“The answer is made clear, at least in the United States, where Article I, Section 8, clause 8 of the US Constitution specifies the purpose of copyright: ‘to promote the progress of Science and the useful Arts.’ Granting copyrights ‘for limited times’ (a term of 14 years at the time that clause was written) is a means of promoting the public good.”
Of course, surviving as an artist these days is no easy task. As legitimate concerns about the future of many vocations come into view, the ability to protect your work against competition will be essential, especially competition as daunting as a system that has processed everything ever posted to the internet faster than any human could.
Yet the creation of something new and non-infringing from protected works is also guided by precedent. Sony and Sega each had their day in court trying to protect copyrighted works, but because the end results served the public good, the infringement claims went nowhere.
“Yes, these cases say, there is literal copying involved in this process, but the end result (and the only thing offered to the public in competition with the works that were copied ‘behind the curtain’) is something new and non-infringing — exactly the kind of creativity that copyright is meant to promote, not discourage.”
It can be hard for creatives to wrap their heads around their work being used for the public good rather than personal monetary gain, especially in today’s hyper-competitive world.
But in the eyes of the law, the public domain persists.
Taking a Moonshot
A potential breakthrough in pancreatic cancer screening was recently reported in Nature: AI algorithms predicted this aggressive cancer up to three years ahead of current diagnostic tests.
Why it matters. Pancreatic cancer is one of the deadliest forms of cancer, with a 5-year survival rate of just 12%. By training AI algorithms on millions of records from the Danish National Patient Registry and the US Veterans Affairs Corporate Data Warehouse, the team correlated particular diagnosis codes with pancreatic cancer.
While exciting, the study’s authors cautioned that the algorithms cannot identify mechanisms or causative events.
“Like often in science, correlation is useful for prediction, but causation is much harder to establish,” says co-senior investigator Chris Sander.
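The study’s core intuition, that diagnosis-code histories carry predictive signal even without establishing causation, can be sketched crudely. The patient data and codes below are invented for illustration, and the actual Nature study trained far more sophisticated models on millions of real records; this toy merely computes a per-code “lift,” i.e. how much more likely a later cancer diagnosis is when a given code appears in a patient’s history:

```python
# Toy patient histories: (set of prior diagnosis codes, later-cancer flag).
# All codes and outcomes here are invented for illustration only.
histories = [
    ({"K86.1", "E11"}, True),
    ({"K86.1"}, True),
    ({"E11"}, False),
    ({"J45"}, False),
    ({"J45", "E11"}, False),
    ({"K86.1", "J45"}, True),
]

def code_lift(code, histories):
    """P(cancer | code seen) divided by the baseline P(cancer).

    A lift above 1.0 means the code is associated with higher risk;
    association only — this says nothing about causation.
    """
    with_code = [flag for codes, flag in histories if code in codes]
    baseline = sum(flag for _, flag in histories) / len(histories)
    if not with_code:
        return 0.0
    return (sum(with_code) / len(with_code)) / baseline

print(code_lift("K86.1", histories))  # strongly associated in this toy data
print(code_lift("J45", histories))    # below-baseline association
```

In the real study the signal comes from sequences of codes over time, not single codes, but the same caveat applies: a high lift is a useful predictor while remaining silent on mechanism.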
Still, researchers are hopeful. According to Sander, once these algorithms are established, the costs are moderate, which could be great news in a notoriously expensive healthcare system.
“The training is what consumes considerable computing resources. The actual clinical tests to see early signs of cancer or to detect cancer when it is still very small are costly, much more expensive than for example mammograms.”
AI Constitution?
The question of how to govern AI is an essential conversation right now. While the Biden administration has started considering guardrails, Open Philanthropy recently got ahead of the issue by publishing “12 tentative ideas for US AI policy.”
Robot Declaration. The author, Luke Muehlhauser, believes the US tends to over-regulate (a contentious claim, given the lack of regulation in areas like antitrust and climate policy). Given the rapid expansion of the AI industry, however, he feels some caution is warranted.
His 12 policy options are below, though you can click the link above to read his detailed explanation of each.
1. Software export controls
2. Require hardware security features on cutting-edge chips
3. Track stocks and flows of cutting-edge chips, and license big clusters
4. Track and require a license to develop frontier AI models
5. Information security requirements
6. Testing and evaluation requirements
7. Fund specific genres of alignment, interpretability, and model evaluation R&D
8. Fund defensive information security R&D
9. Create a narrow antitrust safe harbor for AI safety & security collaboration
10. Require certain kinds of AI incident reporting
11. Clarify the liability of AI developers for concrete AI harms
12. Create means for rapid shutdown of large compute clusters and training runs
AI Tool of the Week
Thumbnail Maker is a boon for YouTubers.
Selecting video thumbnails is one of the more laborious tasks, especially for non-designers. Yet it’s a critical driver of clickthrough rates.
Thumbnail Maker lets you create a thumbnail in one click, improving image quality, colors, and lighting in the process. The home page demo is indicative of what’s possible with AI, and it’s rather impressive.