Just a stupid idea since I stumbled over this issue: what if you process each channel separately?
Untested pseudo code:
import noisereduce as nr
import numpy as np

def _interleave(left, right):
    """Given two separate arrays, return a new interleaved array.

    This function is useful for converting separate left/right audio
    streams into one stereo audio stream. Input arrays and the returned
    array are NumPy arrays.
    """
    return np.ravel(np.vstack((left, right)), order='F')

audio_chunk = None           # your audio data
input_channels = 2           # for stereo audio
sample_rate = 16000          # your audio sample rate
audio_data_dtype = np.int16  # your audio data type

if isinstance(audio_chunk, bytes):
    audio_chunk = np.frombuffer(audio_chunk, dtype=audio_data_dtype)

# reshape the interleaved samples to (-1, input_channels)
audio_data = audio_chunk.reshape(-1, input_channels)

# process the channels one by one
clean_channels = []
for i in range(input_channels):
    channel_data = audio_data[:, i]
    # reduce noise on this channel only
    channel_data = nr.reduce_noise(y=channel_data, sr=sample_rate)
    clean_channels.append(channel_data)

# combine the channels again
final_audio_data = _interleave(*clean_channels)

# convert back to bytes
final_bytes = np.asarray(final_audio_data, dtype=audio_data_dtype).tobytes()
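For completeness, here is how the same per-channel idea might look end to end on a WAV file. This is only a hedged sketch: the scipy-based I/O and the file names are my own assumptions for illustration, not part of the snippet above.

# Untested end-to-end sketch of the same idea; "stereo_input.wav" / "stereo_output.wav"
# are placeholder names, and scipy is only used here for convenient WAV I/O.
import numpy as np
import noisereduce as nr
from scipy.io import wavfile

rate, data = wavfile.read("stereo_input.wav")   # data: (n_frames, n_channels) for a stereo file

clean_channels = []
for ch in range(data.shape[1]):
    # denoise each channel independently, exactly as in the loop above
    clean_channels.append(nr.reduce_noise(y=data[:, ch].astype(np.float32), sr=rate))

# stack back to (n_frames, n_channels) and restore the original sample dtype
cleaned = np.column_stack(clean_channels).astype(data.dtype)
wavfile.write("stereo_output.wav", rate, cleaned)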
Several people have asked for noisereduce to work for multiple channels.