I asked the co-chair of the W3C Audio Working Group (he was doing a Q&A in his capacity as a manager on Google's web audio team) why nothing in an audio graph can be reused. The standard practice is to create a new audio graph every time you want to generate a "new sound" (more or less) and throw the old ones away. All of the source nodes (the starting points of a graph), like AudioBufferSourceNode and OscillatorNode, can only be started once.
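The "started once" rule is concrete: calling start() a second time on the same source node throws an InvalidStateError. A browser-only sketch (the oscillator setup here is just for illustration):

```javascript
// Browser-only sketch: source nodes are one-shot.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.connect(ctx.destination);
osc.start();
osc.stop(ctx.currentTime + 0.5);
// osc.start(); // would throw an InvalidStateError: the node was already started
```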
An AudioBufferSourceNode can only be played once; after each call to start(), you have to create a new node if you want to play the same sound again. Fortunately, these nodes are very inexpensive to create, and the actual AudioBuffers can be reused for multiple plays of the sound. Indeed, you can use these nodes in a "fire and forget" manner: create the node, call start() to begin playing the sound, and don't even bother to hold a reference to it. It will automatically be garbage-collected at an appropriate time, which won't be until sometime after the sound has finished playing.
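The fire-and-forget pattern described above might look something like this (a browser-only sketch; `loadSound`, `playSound`, and `decodedBuffer` are my own names, not anything from the spec):

```javascript
// Assumes a browser environment with the Web Audio API.
const audioCtx = new AudioContext();

let decodedBuffer; // an AudioBuffer, decoded once and reused for every play

async function loadSound(url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  decodedBuffer = await audioCtx.decodeAudioData(arrayBuffer);
}

function playSound() {
  // A fresh, cheap source node per play; the expensive AudioBuffer is shared.
  const source = audioCtx.createBufferSource();
  source.buffer = decodedBuffer;
  source.connect(audioCtx.destination);
  source.start();
  // No reference kept: the node is garbage-collected sometime after it finishes.
}
```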
His answer was that, while you construct audio graphs on the main thread, the graph gets handed off to an audio rendering thread once playback starts. That thread uses the graph to produce sound, and the implication is that it doesn't bother syncing state back to the graph on the main thread. So the graph on the main thread is basically dead? He said "there's no introspection on the audio graph".
What makes this confusing is that I know there is some introspection. For example, you can read an AudioBufferSourceNode's detune.value and watch it change (if you've set up automation to alter it over time).
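Here's the kind of thing I mean (browser-only sketch; I'm using an OscillatorNode's detune param for brevity, but AudioBufferSourceNode has the same param, and the interval logging is just to watch the value):

```javascript
// Browser-only sketch: AudioParam values set by automation are readable
// from the main thread, which looks like introspection to me.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.connect(ctx.destination);

// Ramp detune from 0 to 1200 cents (one octave up) over two seconds.
osc.detune.setValueAtTime(0, ctx.currentTime);
osc.detune.linearRampToValueAtTime(1200, ctx.currentTime + 2);
osc.start();

// Reading detune.value on the main thread shows the automated value changing.
const timer = setInterval(() => {
  console.log(osc.detune.value);
  if (ctx.currentTime > 2) clearInterval(timer);
}, 250);
```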
So…I do not know what's up, but I gotta get back to my job.