Bridging the gap
🎸 While many musicians think generative songwriting is garbage—the inimitable Nick Cave replied that generative lyrics written in his style represented “a grotesque mockery of what it is to be human”—other artists are embracing the possibilities of a future filled with AI-enhanced music.
During LA Tech Week, Black Eyed Peas cofounder Will.i.am advised young musicians to start working with AI tools.
“I think what’s going to happen real soon is that instead of going to the studio and making a song to release on Spotify, or Apple Music or [other] streaming platforms, artists are going to go into the studio to train their model. And it’s their model that they’re going to train, because they’re training it to their fingerprint, to their creative thumbprint.”
🎤 Will has a vested interest: his FYI (Focus Your Ideas) platform uses AI to inspire artists who feed their voice and video calls into it.
Beyond his own app, Will is finding inspiration working with a music LLM created by Google.
“I heard a song that I’d written that I’d never wrote, a song I’d never produced that I produced. That was so freakin’ fierce. Every sound was crisp. Every synth was like, Yeah, that's the sound. The bass was like, Yo, that's a right bass. Drums are punchy. The lyrics was like Yo, I would have [written] that myself.”
Solid advice from a man who’s sold over 80 million records.
Up in smoke
🔥 Canadian wildfires recently made New York City’s air quality the worst in the world. As residents were scrambling to check their region’s Air Quality Index (AQI), Google wasn’t helping matters.
Google’s AI-powered search spat back wildly inaccurate info: one search for Brooklyn’s AQI returned 30, about as good as it gets, when in reality it was 324, well into the hazardous range.
“Even a search for ‘current Brooklyn AQI’ didn't get us any closer to the truth. With that query, we got an even lower figure: a healthy 23. Similar queries for Philadelphia and Charlotte's indexes provided similarly low and false figures.”
While Google did provide a link to the US government site, AirNow, how many people are going to take that extra step?
Credit to Google for providing AI disclaimers, but also: read the room. You know better than anyone how your users function online. As Futurism concludes,
“AI that provides incorrect information during a crisis, until reporters asking questions prompts human employees to manually correct the AI's errors, isn't a very inspiring system. If the tech has any future, it will need to evolve beyond that reality soon.”
Blockbustered
🧠 Reading minds based on brain waves has long been a dream of science fiction authors. In recent years, recognizing emotions from biofeedback has started to show promise. Researchers have even floated the possibility of pulling text from firing synapses.
Now, researchers have produced high-quality video from publicly available fMRI readings with the help of an AI model called MinD-Video.
While not a one-to-one rendering, the results are pretty impressive.
The researchers believe this tech might also help us better understand the brain, specifically the visual cortex.
The tech is still in its infancy, though the possibilities are incredible, provided we stay ahead of data privacy issues: there’s nothing more invasive than rendering private thoughts in 8K.
AI Tool of the Week
😼 As new AI users grapple with the reality of prompting, the open-source Prompter allows you to generate and share prompts for any generative visual platform.
The tool’s creator shared a few of their favorite prompts on Twitter.
Here’s one I played with in Midjourney: A film still of feral cats, chasing salmons in a stream, betamax, cool color grading, cgi, set in 1912