Check out Paul Lamere's tutorial about playlisting that he presented at ISMIR 2010 (the International Society for Music Information Retrieval holds conferences about all this stuff, and Paul led the developer platform at The Echo Nest, which Spotify later bought):
ISMIR: The International Society for Music Information Retrieval
>Tutorial 4: Finding A Path Through The Jukebox -- The Playlist Tutorial. The simple playlist, in its many forms -- from the radio show, to the album, to the mixtape has long been a part of how people discover, listen to and share music. As the world of online music grows, the playlist is once again becoming a central tool to help listeners successfully experience music. Further, the playlist is increasingly a vehicle for recommendation and discovery of new or unknown music. More and more, commercial music services such as Pandora, Last.fm, iTunes and Spotify rely on the playlist to improve the listening experience. In this tutorial we look at the state of the art in playlisting. We present a brief history of the playlist, provide an overview of the different types of playlists and take an in-depth look at the state-of-the-art in automatic playlist generation including commercial and academic systems. We explore methods of evaluating playlists and ways that MIR techniques can be used to improve playlists. Our tutorial concludes with a discussion of what the future may hold for playlists and playlist generation/construction.
A lot of this discussion makes more sense if you know the history of The Echo Nest and their acquisition by Spotify.
The Echo Nest was one of the most interesting music-tech companies ever built: a music intelligence platform spun out of MIT that analyzed audio, metadata, web text, artist similarity, genre structure, and playlist construction. Spotify bought them in 2014 specifically to strengthen music discovery and recommendation. At the time, Spotify said the deal would let it use The Echo Nest's "in depth musical understanding and tools for curation", and even said the Echo Nest API would remain "free and open" for developers.
If you ever used the old Echo Nest APIs, Remix SDK, demos, Music Hack Day projects, or Paul Lamere's experiments, that was a golden era. Echo Nest had open APIs for artist similarity, track analysis, playlisting, "taste profiles", ID mapping across services, and beat/segment-level music analysis. Paul Lamere's whole ecosystem of demos came out of that world: Boil the Frog, Sort Your Music, Organize Your Music, playlistminer, and later Smarter Playlists. His GitHub still points to a lot of that lineage, and his blog is still active. In fact, he posted just this month about rebuilding Smarter Playlists after ten years of use.
The sad part is that the open developer platform mostly did not survive the acquisition. By 2016, developers were being told that the Echo Nest API would stop issuing new keys and then stop serving requests, with migration to Spotify’s API instead. Community discussions at the time also noted that some Echo Nest capabilities, especially things like Rosetta-style cross-service mapping, were not really carried over.
That's also why Spotify's current AI DJ is so frustrating. The problem is that "AI DJ" is not the same thing as a system that deeply understands musical structure, discography semantics, performance history, or classical work/movement hierarchy. It's a recommendation + narration layer, not a true MIR-native musical intelligence system.
If you're interested in the research side of this field, the conference is ISMIR: the International Society for Music Information Retrieval, which is literally dedicated to computational tools for processing, searching, organizing, and accessing music-related data. That community is still very active. The ISMIR site describes MIR exactly in those terms, and the 2010 Utrecht conference included Paul Lamere's tutorial, "Finding A Path Through The Jukebox -- The Playlist Tutorial."
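The kind of automatic playlist generation that tutorial surveys can be sketched, in its simplest form, as a greedy walk through audio-feature space: start from a seed track and repeatedly append the nearest unused track. Everything below is illustrative; the feature names and values are invented here, not taken from any Echo Nest or Spotify API.

```python
import math

def distance(a, b):
    """Euclidean distance between two feature dicts with the same keys."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def greedy_playlist(seed, candidates, length=5):
    """Build a playlist by repeatedly appending the nearest unused track."""
    playlist = [seed]
    pool = list(candidates)
    while pool and len(playlist) < length:
        nxt = min(pool, key=lambda t: distance(playlist[-1]["features"], t["features"]))
        pool.remove(nxt)
        playlist.append(nxt)
    return playlist

# Toy catalog: tempo/energy normalized to 0..1, values made up for illustration.
tracks = [
    {"name": "A", "features": {"tempo": 0.50, "energy": 0.40}},
    {"name": "B", "features": {"tempo": 0.55, "energy": 0.45}},
    {"name": "C", "features": {"tempo": 0.90, "energy": 0.95}},
    {"name": "D", "features": {"tempo": 0.60, "energy": 0.50}},
]

print([t["name"] for t in greedy_playlist(tracks[0], tracks[1:], length=4)])
# → ['A', 'B', 'D', 'C']: the high-energy outlier C lands last.
```

Real systems layer much more on top of this (diversity constraints, transitions, taste models), which is exactly what the tutorial's "state of the art" discussion covers.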
>gffrd on June 26, 2023, on: Show HN: Mofi – Content-aware fill for audio to ch...
>Yes! It was "Infinite Jukebox," created by Paul Lamere ... it was awesome because it would analyse a track, then visualize its "components" and you could watch as the new "infinite" track looped back on itself and jumped from point-to-point in the original track to create an everlasting one.
>He created some excellent products from the Rdio API, and later Spotify ... and I believe his analysis engine ended up being the foundation upon which Spotify's _play more tracks like these_ capability is based.
>Looks like he's moved over to publish on Substack -- there's a recent(ish) post reflecting on 10 years of Infinite Jukebox:
>However, that wasn't the end of the Infinite Jukebox. An enterprising developer: Izzy Dahanela made her own hack on top of mine. To make it work without using uploaded content, she matches up the Echo Nest / Spotify music analysis with the corresponding song on YouTube. She hosts this at eternalbox.dev. It runs just as well as it ever did, 10 years later.
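The mechanism the comment describes can be sketched quite compactly: divide a song into beats, each summarized by an analysis vector (the Echo Nest analysis exposed timbre and pitch vectors per segment); link beats whose vectors are close; then, at playback time, occasionally follow a link instead of advancing. This is a toy reconstruction of the idea, not Paul Lamere's actual implementation; the threshold and jump probability are arbitrary.

```python
import random

THRESHOLD = 0.3  # arbitrary similarity cutoff for this illustration

def similar(a, b):
    """Two beats 'sound alike' if their vectors are close in Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5 < THRESHOLD

def build_jump_graph(beats):
    """Map each beat index to the non-adjacent beats it could jump to."""
    return {i: [j for j, bj in enumerate(beats)
                if abs(i - j) > 1 and similar(bi, bj)]
            for i, bi in enumerate(beats)}

def infinite_walk(beats, steps, rng):
    """Play beats in order, occasionally jumping along a graph edge."""
    graph = build_jump_graph(beats)
    path, i = [], 0
    for _ in range(steps):
        path.append(i)
        if graph[i] and rng.random() < 0.5:
            i = rng.choice(graph[i])   # jump to a similar-sounding beat
        else:
            i = (i + 1) % len(beats)   # otherwise advance (wrap at the end)
    return path

# Toy "timbre vectors" -- beats 0 and 4 sound alike, as do 1 and 5,
# so the walk can loop back instead of ever reaching the end.
beats = [(0.1, 0.1), (0.5, 0.5), (0.9, 0.2), (0.3, 0.8), (0.1, 0.12), (0.5, 0.52)]
print(infinite_walk(beats, 12, random.Random(1)))
```

The visualization in the original rendered exactly this graph: arcs across the track showing every candidate jump, with the playhead occasionally taking one.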
>DonHopkins on June 28, 2023, on: Show HN: Mofi – Content-aware fill for audio to ch...
>I was working on some music retrieval stuff in 2010, so I joined the EchoNest developer program and played around with their web apis that let you upload music and download an analysis that you could use in all kinds of cool ways. They had an SDK with some great demos and example code. I discussed it with Eric Swenson and Paul Lamere, and had the chance to hang out with Paul Lamere and Ben Fields at ISMIR 2010 (the International Society for Music Information Retrieval conference) in Utrecht, where they gave a tutorial about playlisting:
>Tutorial 4: Finding A Path Through The Jukebox -- The Playlist Tutorial. The simple playlist, in its many forms -- from the radio show, to the album, to the mixtape has long been a part of how people discover, listen to and share music. As the world of online music grows, the playlist is once again becoming a central tool to help listeners successfully experience music. Further, the playlist is increasingly a vehicle for recommendation and discovery of new or unknown music. More and more, commercial music services such as Pandora, Last.fm, iTunes and Spotify rely on the playlist to improve the listening experience. In this tutorial we look at the state of the art in playlisting. We present a brief history of the playlist, provide an overview of the different types of playlists and take an in-depth look at the state-of-the-art in automatic playlist generation including commercial and academic systems. We explore methods of evaluating playlists and ways that MIR techniques can be used to improve playlists. Our tutorial concludes with a discussion of what the future may hold for playlists and playlist generation/construction.
>[...]
Some of the most interesting Echo Nest descendants are still around in one form or another. Paul Lamere's current/public projects include Smarter Playlists, and his GitHub still highlights SortYourMusic, OrganizeYourMusic, playlistminer, and BoilTheFrog. Glenn McDonald’s Every Noise at Once is another major descendant of that tradition: an enormous map of music genre space. Glenn's own site still describes it as an "inexorably expanding universe of music-processing experiments", and the public genre pages now explicitly say they're a long-running snapshot based on Spotify data through 2023-11-19. After Spotify's layoffs in 2023, TechCrunch reported that Glenn lost access to the internal data needed to keep Every Noise fully updated, which is why it now feels more archival than alive.
Back in 1998 when I was working on The Sims 1, I proposed in my review of the design document something I called "Moody Music": essentially a soundtrack plus a synchronized semantic/emotional control track that could affect gameplay over time. The idea was that music wouldn't just decorate the simulation; it would change it: influencing mood, motives, relationships, skills, timing, and even triggering events at specific musical moments. I wrote that up in my review of the 1998-08-07 Sims design document, along with the broader idea of letting the game recognize a player's own CDs and fetch associated "moody tracks" from the network.
Don’s review of The Sims Design Document, Draft 3 – 8/7/98:
>I have some ideas about how the music could effect the game, that I will write up more completely later. In a nutshell, the people in the house could have a cd or record collection to choose from, each record an object that has the sound (audio wave and/or midi) and a “moody” track synchronized with the music. Playing the music also plays the moods into the environment that the people pick up on. Music can subtly effect how people react to the environment, objects, and each other. It can effect their motives and even their skills temporarily. For example, you might be able to clean the house better and faster if you put on some up tempo bouncy music.

>The player should be able to assume the role of disc jockey on the radio, and play from another larger library of music and commercials, that effect the peoples moods and buying habits. The TV of course is another source of mood altering temporal media, with commercials and shows that should effect different people differently. But the most important part of this idea is instead of the game effecting the music that’s played, the music effects how the game plays! The ultimate way for the user to effect the game via music, is to insert one of their own CD’s into their real computer’s CDROM drive, and the game would recognize it, and start playing it (maybe with a simple cd player interface to select the song). There could be a database associating the unique ID number of the CD with a table of contents and “moody” tracks that tell how the song effects the peoples emotions over time, with "percussion" events at dramatic moments of the music that can trigger arbitrary events in the game (like provoking a fight that was brewing, or triggering an orgasm at just the right place in the song). We hire monkeys to listen to well known CD’s, and enter time synchronized tracks with semantic meanings in Max (like note tracks, and user defined numeric tracks) or some other timeline editing tool. Put the database up on the web for instant retrieval, so when somebody sticks in a new CD, it downloads our “moody” tracks that go with it, and it starts playing and effecting their game! Streaming emotions over the net! Eventually there should be an end-user tool so people can record their own responses to music as moody tracks they can use in our games.

>This mechanism could be used in all kinds of games, to varying degrees of effect. I’m not saying that music should be the only way to control the game – it’s more like a subtle background effect, but there certainly could be a scenario where you try to accomplish some task (like taming a wild beast) by using only your musical taste and timing. The real bottom line benefit is that you get to listen to your OWN cd collection of music you want to hear, instead of being driven crazy by the repetitive music bundled with the game.
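The "moody" track in the review above is essentially a time-coded control channel riding alongside the audio: a list of keyframes the simulation samples as the song plays, plus discrete "percussion" events it can react to. Here is a minimal sketch of that data structure; every name in it is invented for illustration, and nothing like this shipped in The Sims 1.

```python
import bisect

class MoodyTrack:
    """A time-synchronized emotional control track for one song."""

    def __init__(self, keyframes):
        # keyframes: list of (time_sec, emotion, intensity), any order
        self.keyframes = sorted(keyframes)
        self.times = [k[0] for k in self.keyframes]

    def mood_at(self, t):
        """Return the most recent (emotion, intensity) at playback time t."""
        i = bisect.bisect_right(self.times, t) - 1
        if i < 0:
            return ("neutral", 0.0)
        _, emotion, intensity = self.keyframes[i]
        return (emotion, intensity)

# An up-tempo song that peaks at 42 seconds -- the kind of dramatic
# moment a "percussion" event could hang a scripted game trigger on.
track = MoodyTrack([
    (0.0,  "calm",     0.2),
    (15.0, "upbeat",   0.6),
    (42.0, "euphoric", 1.0),
    (60.0, "upbeat",   0.5),
])

print(track.mood_at(20.0))
# → ('upbeat', 0.6)
```

The game loop would poll `mood_at(playback_position)` each tick and feed the result into motives and reactions; the web-database part of the idea is just keying such tracks by CD ID for download.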
In hindsight it was quite adjacent to MIR, affective computing, adaptive soundtrack systems, and some of the ambitions The Echo Nest represented. That's why I was so excited about The Echo Nest in 2010, when I was working with Will Wright at the Stupid Fun Club on MediaGraph, a system for spatially organizing and navigating music.
MediaGraph Music Navigation with Pie Menus Prototype developed for Will Wright's Stupid Fun Club
>This is a demo of a user interface research prototype that I developed for Will Wright at the Stupid Fun Club. It includes pie menus, an editable map of music interconnected with roads, and cellular automata.
>It uses one kind of nested hierarchical pie menu to build and edit another kind of geographic networked pie menu.
>What Apple put inside the Neo is the complete behavioral contract of the Mac.
I remember seeing and using my first PowerBook 160 and being blown away that it had the "complete behavioral contract of the Mac" that I knew at the time. Even the 16-shade grayscale screen made it more luxurious than a classic black-and-white Mac.
And the "What's on Your PowerBook" ads, with Todd Rundgren and Rev. Don Doll, SJ.
>Todd also co-developed graphic tablet software with a music theme for Apple in a technology venture in the late eighties. With Dave Levine, he designed and developed a screensaver product called Flowfazer (see example of one of the screensavers below), with the strapline “Music for the Eye.” They introduced it at MacWorld thinking they would publish it themselves, but found there was already well-funded competition with Berkeley Systems Flying Toasters and were forced to abandon the project.
The character Beauchamp ("BEE-jum") Day in Armistead Maupin’s Tales of the City is a softening of the English aristocratic way ("Bee-chum") of the French spelling Beauchamp ("boh-SHON" as the French would say).
But he's less a British aristocrat than a brittle prep-school martinet in a cheap tie who rants at a secretary over three typos like a duke defending the realm, sneers about Kelly girls and office decor as if guarding the Social Register, treats sleeping with his own employee as proof of authority, and then sneaks off to bathhouses while running his typing pool with equal parts class anxiety, closet panic, and a middle manager's superiority complex.
Sometimes I wonder about the aristocrats after whom towns and roads in the UK were named, like Lord Penistone of South Yorkshire, or Lady Sluts Hole of Norfolk.
HyperLook was inspired by HyperCard and implemented for the NeWS window system in PostScript, and supported networking. I used it to implement SimCity for Unix.
SimCity, Cellular Automata, and Happy Tool for HyperLook (nee HyperNeWS (nee GoodNeWS))
HyperLook was like HyperCard for NeWS, with PostScript graphics and scripting plus networking. Here are three unique and wacky examples that plug together to show what HyperNeWS was all about, and where we could go in the future!
Atari Cambridge Research- part 5: David Levitt shows Music Box on his Lisp Machine
https://www.youtube.com/watch?v=ocwsVkqEKys
Atari Cambridge Research- part 6: Music box with Tom Trobaugh and drum machine with Jim Davis.
https://www.youtube.com/watch?v=DhA0FGsin_s
Cynthia Solomon has shared a treasure trove of rare classic videos of Seymour Papert, Marvin and Margaret Minsky, kids programming Logo and playing with turtles, and many other amazing things at the MIT AI Lab, MIT Media Lab, and Atari Cambridge Research:
https://news.ycombinator.com/item?id=28604629