The Download: How AI is changing music, and a US city’s AI experiment


While large language models that generate text have exploded in the last three years, a different type of AI, based on what are called diffusion models, is having an unprecedented impact on creative domains.

By transforming random noise into coherent patterns, diffusion models can generate new images, videos, or speech, guided by text prompts or other input data. The best ones can create outputs indistinguishable from the work of people, as well as bizarre, surreal results that feel distinctly nonhuman.
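The core loop behind that "noise to coherent patterns" idea can be sketched in a few lines. This is a toy illustration, not a real diffusion model: in practice the denoiser is a large trained neural network, while here it is a hand-written stand-in that nudges a noisy sample toward a fixed target vector, purely to show the iterative refinement process.

```python
import random

# Toy sketch of the diffusion idea: begin with pure random noise and
# repeatedly apply a "denoiser" that pulls the sample toward coherent
# structure. TARGET and denoise_step are illustrative stand-ins for a
# trained model, not anything from a real diffusion system.

TARGET = [0.0, 1.0, 0.0, 1.0]  # the "coherent pattern" we refine toward


def denoise_step(sample, strength=0.2):
    """Move the sample a small step toward the target pattern."""
    return [x + strength * (t - x) for x, t in zip(sample, TARGET)]


def generate(steps=50, seed=0):
    rng = random.Random(seed)
    # Start from Gaussian noise, as diffusion samplers do.
    sample = [rng.gauss(0, 1) for _ in TARGET]
    for _ in range(steps):
        sample = denoise_step(sample)
    return sample


print(generate())
```

After enough steps the residual noise shrinks geometrically (by a factor of 0.8 per step here), so the output lands very close to the target pattern; real models run an analogous loop guided by a text prompt rather than a fixed vector.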

Now these models are marching into a creative field that is arguably more vulnerable to disruption than any other: music. Music models can now create songs capable of eliciting real emotional responses, presenting a stark example of how difficult it’s becoming to define authorship and originality in the age of AI. Read the full story.

—James O’Donnell

This story is from the next edition of our print magazine, which is all about how technology is changing creativity. Subscribe now to read it and get a copy of the magazine when it lands!

A small US city is experimenting with AI to find out what residents want

Bowling Green, Kentucky, a city of 75,000 residents, recently wrapped up an experiment in using AI for democracy: Can an online polling platform, powered by machine learning, capture what residents want to see happen in their city?


After a month of advertising, the Pol.is portal launched in February. Residents could go to the website and anonymously submit an idea (in less than 140 characters) for what a 25-year plan for their city should include. They could also vote on whether they agreed or disagreed with other ideas.
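The mechanics described above are simple enough to sketch. The following is a hypothetical illustration of that submit-and-vote flow, not Pol.is's actual code or API: anonymous ideas capped at 140 characters, each carrying agree/disagree tallies.

```python
from dataclasses import dataclass

# Toy sketch of the polling mechanics described in the story (an
# illustrative model, not the real Pol.is implementation): residents
# submit short anonymous ideas and vote agree/disagree on others.

MAX_LEN = 140  # the 140-character limit mentioned in the story


@dataclass
class Idea:
    text: str
    agree: int = 0
    disagree: int = 0


class Portal:
    def __init__(self):
        self.ideas = []

    def submit(self, text):
        """Anonymously submit an idea, enforcing the character limit."""
        if len(text) > MAX_LEN:
            raise ValueError("idea exceeds 140 characters")
        idea = Idea(text)
        self.ideas.append(idea)
        return idea

    def vote(self, idea, agrees):
        """Record one agree or disagree vote on an existing idea."""
        if agrees:
            idea.agree += 1
        else:
            idea.disagree += 1


portal = Portal()
idea = portal.submit("Add protected bike lanes downtown")  # example idea
portal.vote(idea, True)
portal.vote(idea, False)
portal.vote(idea, True)
print(idea.agree, idea.disagree)
```

The real platform layers machine learning on top of votes like these to cluster residents into opinion groups, which is where the researchers' reliability questions come in.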

But some researchers question whether soliciting input in this manner is a reliable way to understand what a community wants. Read the full story.

—James O’Donnell


