plotter-4.scd not working on Arch Linux #135

Open · tedmoore opened this issue Aug 11, 2022 · 6 comments

tedmoore (Member) commented Aug 11, 2022

This appeared over on the SuperCollider forum.

https://scsynth.org/t/tutorial-coding-a-2d-corpus-explorer/6357/2

It seems that this file (see below) is getting stuck on Arch Linux.

// define this big function, then way down below, execute it
(
~twoD_instrument = {
	arg folder, sliceThresh = 0.05;
	fork{
		var loader = FluidLoadFolder(folder).play(s,{"done".postln;});
		var src, play_slice, analyses, normed, tree;
		var indices = Buffer(s);

		s.sync;

		if(loader.buffer.numChannels > 1){
			src = Buffer(s);
			FluidBufCompose.processBlocking(s,loader.buffer,startChan:0,numChans:1,destination:src,destStartChan:0,gain:-6.dbamp);
			FluidBufCompose.processBlocking(s,loader.buffer,startChan:1,numChans:1,destination:src,destStartChan:0,gain:-6.dbamp,destGain:1);
		}{
			src = loader.buffer
		};

		FluidBufOnsetSlice.processBlocking(s,src,metric:9,threshold:sliceThresh,indices:indices,action:{
			"done".postln;
			"average seconds per slice: %".format(src.duration / indices.numFrames).postln;
		});


		play_slice = {
			arg index;
			{
				var startsamp = Index.kr(indices,index);
				var stopsamp = Index.kr(indices,index+1);
				var phs = Phasor.ar(0,BufRateScale.ir(src),startsamp,stopsamp);
				var sig = BufRd.ar(1,src,phs);
				var dursecs = (stopsamp - startsamp) / BufSampleRate.ir(src);
				var env;

				dursecs = min(dursecs,1);

				env = EnvGen.kr(Env([0,1,1,0],[0.03,dursecs-0.06,0.03]),doneAction:2);
				sig.dup * env;
			}.play;
		};

		// analysis
		analyses = FluidDataSet(s);
		indices.loadToFloatArray(action:{
			arg fa;
			fork{
				var spec = Buffer(s);
				var stats = Buffer(s);
				var stats2 = Buffer(s);
				var loudness = Buffer(s);
				var point = Buffer(s);

				fa.doAdjacentPairs{
					arg start, end, i;
					var num = end - start;

					FluidBufSpectralShape.processBlocking(s,src,start,num,features:spec,select:[\centroid]);
					FluidBufStats.processBlocking(s,spec,stats:stats,select:[\mean]);

					FluidBufLoudness.processBlocking(s,src,start,num,features:loudness,select:[\loudness]);
					FluidBufStats.processBlocking(s,loudness,stats:stats2,select:[\mean]);

					FluidBufCompose.processBlocking(s,stats,destination:point,destStartFrame:0);
					FluidBufCompose.processBlocking(s,stats2,destination:point,destStartFrame:1);

					analyses.addPoint(i,point);

					"slice % / %".format(i,fa.size).postln;

					if((i%100) == 99){s.sync};
				};

				s.sync;

				analyses.print;
				normed = FluidDataSet(s);
				FluidNormalize(s).fitTransform(analyses,normed);

				normed.print;

				tree = FluidKDTree(s);
				tree.fit(normed);

				// plot
				normed.dump({
					arg dict;
					var point = Buffer.alloc(s,2);
					var previous = nil;
					dict.postln;
					defer{
						FluidPlotter(dict:dict,mouseMoveAction:{
							arg view, x, y;
							[x,y].postln;
							point.setn(0,[x,y]);
							tree.kNearest(point,1,{
								arg nearest;
								if(nearest != previous){
									nearest.postln;
									view.highlight_(nearest);
									play_slice.(nearest.asInteger);
									previous = nearest;
								}
							});
						});
					}
				});

			}
		});
	}
};
)

~twoD_instrument.(FluidFilesPath());

suspiria commented Aug 11, 2022

Thanks for filing the issue, I'm the OP of that thread. Some more info:

Platform: Latest x86_64 Arch Linux
SC version: 3.12.2 (built from source using a modified ABS PKGBUILD with DNATIVE=ON, DSC_ABLETON_LINK=OFF, DCMAKE_BUILD_TYPE=Release)
FluCoMa version: 1.0.2+sha.2ca6e58.core.sha.804a3b39

Here's an isolated version of the code I'm trying to run (taken from the 2D Corpus Explorer tutorial, part 5 - plotter-5-starter.scd as linked in the description).

// the folder containing the corpus
~folder = FluidFilesPath();

// load into a buffer
~loader = FluidLoadFolder(~folder).play(s,{"done loading folder".postln;});

// sum to mono (if not mono)
(
if(~loader.buffer.numChannels > 1){
	~src = Buffer(s);
	~loader.buffer.numChannels.do{
		arg chan_i;
		FluidBufCompose.processBlocking(s,
			~loader.buffer,
			startChan:chan_i,
			numChans:1,
			gain:~loader.buffer.numChannels.reciprocal,
			destination:~src,
			destGain:1,
			action:{"copied channel: %".format(chan_i).postln}
		);
	};
}{
	"loader buffer is already mono".postln;
	~src = ~loader.buffer;
};
)

// slice the buffer in non real-time
(
~indices = Buffer(s);
FluidBufOnsetSlice.processBlocking(s,~src,metric:9,threshold:0.05,indices:~indices,action:{
	"found % slice points".format(~indices.numFrames).postln;
	"average duration per slice: %".format(~src.duration / (~indices.numFrames+1)).postln;
});
)

// analysis
(
~analyses = FluidDataSet(s);
~indices.loadToFloatArray(action:{
	arg fa;
	var spec = Buffer(s);
	var stats = Buffer(s);
	var stats2 = Buffer(s);
	var loudness = Buffer(s);
	var point = Buffer(s);

	fa.doAdjacentPairs{
		arg start, end, i;
		var num = end - start;

		FluidBufSpectralShape.processBlocking(s,~src,start,num,features:spec,select:[\centroid]);
		FluidBufStats.processBlocking(s,spec,stats:stats,select:[\mean]);

		FluidBufLoudness.processBlocking(s,~src,start,num,features:loudness,select:[\loudness]);
		FluidBufStats.processBlocking(s,loudness,stats:stats2,select:[\mean]);

		FluidBufCompose.processBlocking(s,stats,destination:point,destStartFrame:0);
		FluidBufCompose.processBlocking(s,stats2,destination:point,destStartFrame:1);

		~analyses.addPoint(i,point);

		"analyzing slice % / %".format(i+1,fa.size-1).postln;

		if((i%100) == 99){s.sync;}
	};

	s.sync;

	~analyses.print;
});
)

Evaluating the blocks in order, everything works up until the "analysis" part. Running that last region, I get the following output:

-> Buffer(2, 1474, 1, 48000.0, nil)
analyzing slice 1 / 1473
analyzing slice 2 / 1473
analyzing slice 3 / 1473
...
analyzing slice 98 / 1473
analyzing slice 99 / 1473
analyzing slice 100 / 1473

At this point, the analysis gets stuck without producing any errors. Occasionally, upon rebooting the interpreter/server and trying again, it reaches slice 200 before stopping.

Changing if((i%100)==99){s.sync;} to a bare s.sync; makes the analysis run smoothly without any errors, albeit much more slowly than the original version due to the constant syncing. I tried varying how often the sync happens, and once every 14-15 iterations seems to be the point where the analysis starts breaking, but it's inconsistent.

var every = 14; // analysis stops working when (every >= 15)
...
if((i%every) == (every-1)) { s.sync };

elgiano (Contributor) commented Aug 17, 2022

I'm also on Arch and I see the same problem: something is wrong in the sync mechanism, so the analysis gets stuck.
Important note: with server.options.protocol = \tcp I can run it many times in a row without problems. So it looks like some messages get lost over UDP?
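
A minimal sketch of that TCP workaround (standard sclang, nothing FluCoMa-specific; the protocol can only be changed before the server boots):

(
s = Server.default;
s.options.protocol = \tcp; // default is \udp
s.reboot; // reboot so the new protocol takes effect
)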

In my practice with SuperCollider I noticed the same problem when loading a large number of buffers in parallel. Apparently some b_query replies get lost, and so s.sync stops working (over UDP, not over TCP). Since every processBlocking issues a b_query, I suspect that it's the same problem.

My workaround for these situations is not to rely on sync for completion, but instead on FluCoMa's own \done mechanism. It works better in my experience, and it is fast. It looks like this (a sketch follows the list):

  • run a pool of parallel processes (using Semaphore), one for each slice
  • for each slice, copy the buffer portion to a new temporary buffer (see Parallel processing: are slices copied across threads correctly? #138)
  • run the analysis using .process, not .processBlocking (otherwise the Semaphore mechanism is useless)
  • when analysis for this slice is completed (last callback fired): insert in dataset, free temporary slice buffer
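
A minimal sketch of that scheme (hypothetical helper, much simplified from the actual FluidHelper code; a single-feature analysis chain, with the same keyword arguments used in the code above):

(
// run slice analyses through a Semaphore-limited pool, relying on
// FluCoMa's own completion callbacks instead of s.sync
~analyseSlices = { |src, fa, dataset, maxJobs = 4|
	var pool = Semaphore(maxJobs);
	fork {
		fa.doAdjacentPairs { |start, end, i|
			pool.wait; // block until a slot in the pool is free
			fork {
				var tmp = Buffer(s), spec = Buffer(s), stats = Buffer(s);
				// copy this slice into its own temporary buffer so that
				// parallel jobs don't process overlapping regions
				FluidBufCompose.process(s, src, start, end - start, destination: tmp, action: {
					FluidBufSpectralShape.process(s, tmp, features: spec, select: [\centroid], action: {
						FluidBufStats.process(s, spec, stats: stats, select: [\mean], action: {
							dataset.addPoint(i, stats, {
								[tmp, spec, stats].do(_.free); // free temporaries
								pool.signal; // release the pool slot
							});
						});
					});
				});
			};
		};
	};
};
)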

However, using process requires cleaner callback handling, otherwise the code gets far too nested. I made a few functions for this purpose: FluidHelper.await and FluidHelper.bufProcessChain.
I apologize if the code is not so clear; it's still a work in progress. But the main idea is:

  • .await: cleaner callback mechanism using CondVar. Run an async function, wait for it, return results.
  • .bufProcessChain: use await to run a chain of async buffer processes. Handles temporary data buffers under the hood.

There is also FluidHelper.analSlices, which illustrates the Semaphore approach.
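
A minimal sketch of the await idea (my reading of the pattern, not the actual FluidHelper.await source; CondVar requires SC 3.12 or later and must be used inside a Routine):

(
// run an async function that takes a completion callback, suspend the
// calling Routine until that callback fires, then return its result
~await = { |asyncFunc|
	var done = false, result, cond = CondVar();
	asyncFunc.value({ |...res|
		result = res;
		done = true;
		cond.signalOne;
	});
	cond.wait { done };
	result;
};

// usage, inside a Routine:
// ~await.({ |cb| FluidBufStats.process(s, ~spec, stats: ~stats, select: [\mean], action: cb) });
)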


Another solution I found to work is to make a bundle for, say, 100 slices, and sync on that:

// group the slice points into [start, end] pairs, bundle the analysis
// messages for 100 slices at a time, and sync once per bundle
var slices = Array.newFrom(fa).slide(2).clump(2);
slices.clump(100).do { |sliceClump|
    var bundle = s.makeBundle(false) {
        sliceClump.do { |slice| analFunc.(slice) };
    };
    s.sync(bundles: bundle);
};

If the bundles are too big, SC posts a very scary buffer-overflow error. The error is harmless, however, because SC handles it automatically by splitting the big bundle into smaller ones.

weefuzzy (Member) commented

Thanks everyone. I have an Arch VM, so I'll try to reproduce / diagnose when I can, but UDP packet loss does seem like a possible cause. In general, having to rely on robust client-server conversations to the extent that we do for batch buffer processing makes me sad, but I've not yet hit on an alternative.

@elgiano thanks for the encapsulations of Nice Things. I'll have a look...

elgiano (Contributor) commented Sep 13, 2022

News: I can confirm that there's a problem with UDP: sclang drops some messages if they arrive too fast or in too great a number.
I opened an issue at supercollider/supercollider#5870

tedmoore (Member, Author) commented

Hmm, interesting. Thanks @elgiano for investigating and reporting!

weefuzzy (Member) commented

@elgiano I just had a quick skim of the discussion on that SC issue. Those responses pointing out that this is an inherent feature of UDP are, unfortunately, right: packet loss is just a risk under heavy traffic.

Dealing with this robustly is an interesting problem for us. Clearly we can't just force people to use TCP, yet we have quite a few points where we'd like robust communication between client and server, especially when running a whole queue of buffer processes. One possibility might be devising some sort of timeout / back-off scheme on the language side, so that jobs don't simply stall waiting for replies that may never arrive. Even better (but much more work) would be a way of specifying a whole pipeline of work to the server, which would reduce both network traffic and synchronisation overhead.
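
As a rough illustration of the timeout / back-off idea (purely a hypothetical sketch, not existing FluCoMa API; CondVar.waitFor needs SC 3.12+ and a Routine):

(
// wait for a job's completion callback, but give up and retry after a
// timeout rather than stalling forever on a reply that never arrives
~runWithTimeout = { |runJob, timeoutSecs = 2, maxRetries = 3|
	var done = false, cond = CondVar();
	maxRetries.do { |attempt|
		if(done.not) {
			runJob.value({ done = true; cond.signalOne });
			if(cond.waitFor(timeoutSecs, { done }).not) {
				"attempt % timed out, retrying".format(attempt + 1).postln;
			};
		};
	};
	done; // false if every attempt timed out
};
)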

I'll have a think, but there's definitely a fundamental brittleness here for SC batch buffer processing that I'd like to be able to address in the medium-term.
