
Conserve the Sound is an archive of noises from old tape players, projectors and other dying tech

All of us grew up around tech different from what we have today, and many of us look back on those devices with fondness. But can you recall the exact sound your first Casio keyboard made, or the cadence of a rotary phone’s clicks? Conserve the Sound aims to, well, conserve the sound of gadgets like these so that future generations will know what it sounded like to put a cartridge in the NES.

It’s actually quite an old project at this point, first funded back in 2013, and its collection has grown to a considerable size. The money came from German art institution Film & Medienstiftung NRW; the site was created (and is maintained) by creative house Chunderksen.

The whole thing is suitably minimal, much like an actual museum: You find objects either by browsing randomly or by finding a corresponding tag, and are presented with some straightforward imagery and a player loaded with the carefully captured sound of the device being operated.

Though the items themselves are banal, listening to these sounds of a bygone age is strangely addictive. They trigger memories or curiosity — was my Nintendo that squeaky? Didn’t my rotary phone click more? What kind was it anyway? I wonder if they have my old boombox… oh! A View-Master!

The collection continues to grow; it now also includes interviews with experts in various fields on the importance of saving these sounds. You can even submit your own recordings, if you like. “We welcome suggestions in general, sound suggestions, stories, anecdotes and of course collaborations,” write the creators.

I for one would love to revisit all the different modems and sounds I grew up using: 2400, 9600, 14.4, 28.8, all the way up to 56k. Not exactly pleasant noises, admittedly, but I anticipate they will bring back a flood of memories, Proust-style, of BBSes, hours-long download times and pirated screen savers.

