This was a kind of accidental VCV patch, built in combination with a twelve-string guitar piece from archive.org.
I’d been experimenting with variations on the Quad Drum Destroyer, combined with the Confusing Simpler from NYSTHI, and this just hit a nerve. There is something addictive about filtered delays. That, and the original sample just seems to give you the opportunity for basically endless variations that change enough, but not too much.
Where you can go from here
Replace or mix the Confusing Simpler with live input, and jam along with the mangled sample through the same effect chain.
Of course, try with different sound samples to see how it turns out on other material.
Modulate the octave (see below for the sample using the Ethiopian song loop that does just that).
The following piece illustrates what it can do with other samples. I replaced the 12 string guitar with an extended sample from an Ethiopian song. This version of the patch also modulates the Octave on the Confusing Simpler. This works best with whole numbers (-2, -1, 0, 1, 2, 3, 4, 5), which coincidentally is what a VCV Scalar module outputs if you set the Octave mode to ‘Shared’ (i.e. all octaves quantized the same) and turn off every note but the first.
This is a demonstration of the utility of parallel repetition of the same basic signal chain. I like to think that it mirrors the musical idea of harmonic relatedness and modulation. Instead of affecting pitch, this patch affects time, in a rhythmically interesting way.
This patch uses controllers (buttons), and modulation sources to crossfade between a dry signal — in this case a drum machine with some built in random variations — with the same signal delayed and filtered.
TOP ROW: Drum Machine
This uses a VCV Pulse Matrix to drive an instance of a Vult Trummor 2 (for kick and snare) and a Hora Treasure Hihat. Each sound uses 2 rows of the Pulse Matrix — one set to play forward, and one set to play in random order. The two rows are then combined using a NYSTHI Logic module’s OR function. The random triggers are fed through Audible Instruments Bernoulli Gates to thin out the hits that get dropped into the pattern. You can turn up the balance knob on the Bernoulli Gates to get more randomness in your pattern. In the saved patch, this is tuned to my liking.
DELAY ROWS: Wonky Modulation
These are all essentially the same. Going from right to left, there’s a Submarine XF-201 Crossfader that mixes the signal from the row above — in the first case, the output of the drum machine mixer — with a delayed, effected signal.
There’s an AS DelayPlus Delay followed by an XFX F-35 Filter which is the ‘wet’ side of the crossfader. The delay times are set with voltages from the AS BPM Delay/HZ Calc module to musically useful values.
This is a bit tricky, and required some fiddling to get mostly right. There’s an RJ Modules Button you can hit which will flip between the dry and effected signals. The manual control is combined (via an NYSTHI Logic module) with a clocked random gate from a Matthew Friedrichs March Hare module, fed through another Bernoulli Gate to thin out the gates somewhat. The March Hare’s Synced Random source is cool because the random gate signal is triggered on beat based on the clock input.
The output of the Logic ‘OR’ gate triggers an AS ADSR Envelope, which then controls the crossfader module. The beauty of this arrangement (with the clocked random gate) is that A) the Bernoulli Gate gives you control over how much random triggering takes place and B) the envelope smooths out the crossfade, much like a slew limiter (or a Befaco Rampage with rise/fall controls). In particular, the release phase gives a nice effect where it mixes back from the delayed signal to the dry signal.
To work properly — i.e. go from dry to wet 100% when the envelope is triggered — you need to right click/ctrl click on the NYSTHI Logic module and select 0-10V operation. It defaults to 0-5V signals, which will only turn the crossfader to 50%.
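The gate logic in these rows — a manual button OR’ed with probability-thinned random triggers — can be sketched as a little Python toy model (the 25% probability, step counts, and function names here are illustrative, not the patch’s actual settings):

```python
import random

def bernoulli_gate(triggers, p, rng=None):
    """Pass each incoming trigger with probability p, drop it otherwise."""
    rng = rng or random.Random()
    return [t and (rng.random() < p) for t in triggers]

def combine(manual, random_gates):
    """OR the manual button presses with the thinned random gates."""
    return [m or r for m, r in zip(manual, random_gates)]

# 16 clocked triggers; thin them so only ~25% survive,
# then merge with a manual button press on step 0.
clock = [True] * 16
manual = [i == 0 for i in range(16)]
gates = combine(manual, bernoulli_gate(clock, 0.25, random.Random(1)))
```

With p at 0 only the manual button gets through (full manual control); as p rises, more random crossfades sneak into the pattern.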
PLAYING THE PATCH
Push the buttons on the left side of the patch in order to manually bring the delayed/filtered signal in.
You can add some automatic triggering of the crossfade envelope by tweaking the balance on the Bernoulli Gates — from fully clockwise (i.e. no gates pass through, complete manual control) to any amount counterclockwise. If you go close to full counterclockwise, you’ll get more delayed signal than dry most of the time, and the patch begins to sound like a demented robot version of Max Roach, continuously varying the pattern.
And since the delay/filter rows are daisy-chained, you can have one or more of the wet signals coming through, and each row affects the output of the row above it. I think it gives a really liquid-sounding mixing of ghost hits and repeats. It takes on a life of its own and only rarely sounds awkward or out of time.
WHERE TO GO FROM HERE
I can think of several things you can do with the patch to get even wonkier.
Use different left and right delay times on the delays. I gave up on this because it gets really hectic.
Use another crossfader to mix the last row back into the first row’s delay along with the dry signal from the drum machine. This can go non-linear and overloaded with only a bit of feedback, so I’d use it sparingly, and put a NYSTHI 4DCB in front of the wet signal, because this kind of feedback through a long signal chain can destroy your signal with DC offset.
Use effects besides filters. Filters are the most natural thing to use. One thing that will sound jarring is crossfading between the dry signal and an effect that adds stereo separation (like a stereo Chorus or Flanger).
Scale the envelope output into the crossfader, so that you don’t go all the way to 100% wet signal.
Get rid of the drum machine, use VCV Bridge for audio input and output, and load VCVRack + this patch as a send effect in a DAW.
Have fun, and let me know if you have any questions!
This is a more complicated patch than my previous tutorials, but I think it uses some techniques that might inspire VCVRack users in other contexts. Note that I use many paid modules, which you’ll need to have bought to load the patch intact; but there are free modules you can substitute for the paid ones. One of the reasons I think tutorials/patch descriptions like this can be valuable is that they describe techniques that can be applied with many different modules. I could have done this patch entirely with free modules, but it would be slightly more complicated and harder to explain.
USING RAMPAGE FOR CYCLING ENVELOPES
The core of the patch is 2 instances of the BEFACO RAMPAGE, each of which can produce two separate envelopes. It’s based on a Eurorack hardware module, and in both its real and virtual incarnations it can be many things: an envelope generator, a slew limiter, a comparator, and things I don’t even know about yet, like what the BALANCE knob is for.
For my purpose in this patch, I’m using it as a cycling envelope generator. That means that instead of firing a single time, it repeats every time it completes a full cycle. The Rampages control the volume of each oscillator signal (via the Audible Instruments quad VCA), but they also trigger the sample & hold modules that determine the pitch of the oscillators.
This is a pretty standard arrangement for my generative patches. A ML Modules Sample&Hold signal generates a random pitch voltage, which is quantized by a VCV Scalar Module. The pitches are then passed through Fundamental Octave modules to transpose the generated pitches.
The ‘trick’ of this patch is that the EOC (end of cycle) output of each Rampage envelope triggers the Sample&Hold that generates pitch. The result of this arrangement — random, quantized notes triggered at the EOC — is that the pitch of each voice only changes while that voice is at zero amplitude.
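The EOC-to-sample&hold arrangement can be sketched in Python (a toy model of the idea; the scale steps and rise/fall times below are made up, not taken from the patch):

```python
import random

def generative_voice(cycles, rise, fall, rng):
    """Each envelope cycle holds one pitch; a new pitch is sampled only at
    end-of-cycle (EOC), i.e. while the voice is at zero amplitude."""
    events = []
    pitch = rng.choice([0, 2, 4, 7, 9])      # quantized scale steps (semitones)
    for _ in range(cycles):
        events.append((pitch, rise + fall))  # (held pitch, cycle length in s)
        pitch = rng.choice([0, 2, 4, 7, 9])  # EOC fires: sample & hold new pitch
    return events

events = generative_voice(4, rise=2.0, fall=3.0, rng=random.Random(0))
```

Every pitch change lands exactly on a cycle boundary, so there are never audible pitch jumps mid-note.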
THAT’S (ALMOST) ALL
This patch generates ‘edgeless’ tones — the slow attack and decay of each oscillator voice means there are never jarring changes in pitch or volume. The overall volume of the patch varies widely, as different voices reach minimum and maximum volume, overlapping in time and occasionally getting loud or quiet.
There are ways to trigger pitch edges: turning notes on and off in the Scalar module, or choosing different octave transpositions in the Octave module, will trigger pitch changes. But the natural state of this patch is meant to generate edgelessly morphing audio.
There’s some complicated business in the upper right corner of the patch that’s necessary to get the patch running in the first place. The Rampage modules are set to cycle, but they won’t begin cycling without an initial trigger. The RJModules [LIVE] Button in the upper left hand corner will trigger each Rampage envelope to get things going.
The Button is also fed through a NYSTHI Logic module, where its trigger is logically OR’ed with the EOC signals from the RAMPAGE envelopes. The resulting triggers go two ways: the pitch sample&holds are triggered, and the envelopes are triggered.
There’s a row of four AS DelayPlus FX that are fed by the output of each voice, and then into the mixer. They’re set to random, long delay times — hand-random, meaning I tweaked them to different values — and the combination of delay time and feedback doubles each synth voice, delayed in time.
The organic ebb and flow of the sound of this generative patch is enhanced by the delays. You can mute them to hear the patch without the delays, and it sounds basically the same, but not as wide and layered.
There are also some Unfiltered Audio Indent waveshapers, one per oscillator, that distort the sine waves using the ‘Harsh Fold’ algorithm. ‘Harsh Fold’ isn’t actually that harsh, at least when you use moderate gain values. When you morph between the pure sine and the folded signal, you get a complex signal whose sonic character combines saw wave and sine sounds.
There’s also an AS Reverb Stereo FX on effect send A of the VCV Console. The send levels of each oscillator voice are controlled by the RAMPAGE envelopes, but each send level is controlled by a different envelope than the one for that voice’s volume; in other words, a particular voice’s reverb send level follows the level of a different voice.
RANDOM MODULATION ALL OVER THE DAMN PLACE
There are 3 groups of four Matthew Friedrichs Hot Bunny modules that are set up to do random modulation on slow time scales. Since I like a bit more random in my random, the smooth output of each Hot Bunny in a group of 4 slightly modulates the rate of its neighbor, in a daisy chain. It’s worthwhile to look at the outputs in a scope module to see how wonky the random signals get.
At any rate there are 3 things being modulated by the Hot Bunnies.
The rise time of each Rampage envelope.
The fall time of each Rampage envelope.
The gain level for each Indent waveshaper.
Since they all move relatively slowly, the modulations deepen the drifty ‘never the same river twice’ nature of the generated music, without making the results edgier.
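The daisy-chain structure is easy to model in Python. In this sketch, plain sine LFOs stand in for the Hot Bunnies’ smooth random outputs — the rates and modulation depths are made up; the point is the structure, each LFO nudging the rate of its neighbor:

```python
import math
import random

def daisy_chain_lfos(n=4, steps=100, dt=0.01, rng=None):
    """n slow LFOs where each one's output modulates the rate of the next,
    in a daisy chain (last wraps around to first)."""
    rng = rng or random.Random(0)
    phase = [rng.random() for _ in range(n)]    # random starting phases
    rate = [0.1 + 0.05 * i for i in range(n)]   # base rates in Hz (arbitrary)
    out = [0.0] * n
    history = []
    for _ in range(steps):
        for i in range(n):
            mod = 1.0 + 0.3 * out[(i - 1) % n]  # neighbor nudges my rate
            phase[i] = (phase[i] + rate[i] * mod * dt) % 1.0
            out[i] = math.sin(2 * math.pi * phase[i])
        history.append(tuple(out))
    return history

hist = daisy_chain_lfos()
```

Plot (or scope) the outputs and you’ll see the regular cycles drift and wobble as the cross-modulation accumulates.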
There are several things you can tweak to change the output and get different sounds out of this without repatching anything.
Change the notes in Scalar – ctrl-click in the note boxes to turn scale steps on and off.
Change the scale in Scalar – click on the NOTES value and try other equal tempered scales, or load a new SCALA file for other scales.
Increase the modulation on the Indent waveshapers by tweaking the AS AtNuVrTr ATTN and OFFSET modules to the right of the Indent modules.
Tweak the modulation on the RAMPAGE modules with the quad VCA modules to their right.
Change the rise and fall settings for the RAMPAGE envelopes. You can also change the range switches to modify the overall timescale of the envelopes as well, though if you use faster envelopes it can get hectic.
Change the scaling on the random values sent into the SCALAR to get a wider range of note values. If you turn up the levels all the way, you’ll get some high, piercing notes, which I used the quad VCA levels to smooth out.
There’s generally a scaler of some sort between each modulator signal and the parameter it’s modulating. This is almost mandatory for modules without controls for their mod amounts; scalers give you finer-grained control over how the sound changes. If you download the patch at the link given above, you will have a snapshot of how I hand-tuned each of the modulations.
There’s a whole world of generative patches you can create, but there are important questions you need to ask yourself: How random is too random? How fast is too fast or slow? What pitch range and scale gives the result the feeling you want?
That’s the challenge of making generative music interesting. Purely random (or deterministically chaotic) sounds sound random and arbitrary. Your goal is to come up with something that reflects human intention. That’s true if you’re playing a traditional instrument or creating a generative instrument and letting it do its thing.
The core of this patch is using waveshapers to generate harmonically rich distortions of the original sine wave. Since the different waveshapers get mixed, and because they’re all processing a signal of exactly the same frequency, they interfere and reinforce each other. The sound changes restlessly and chaotically over the course of the recording, and you occasionally get ghost notes made when more than one overtone series collides.
The audio signal flows basically from left to right, feeding 4 waveshapers that get mixed and modulated by the keyframe mixer. This is a really good beginner’s patch.
I’ll describe the patch left to right. I liked that it fits mostly in one row.
LogInstruments Precise DC Gen
The DC Gen is used to choose a constant note to send to the fundamental oscillator.
Vult Caudal Mechanical Chaos Source x 2
This module is based on modelling a triple pendulum. Each output represents an arm of the pendulum’s position and velocity. Basically it sounds random, but there are predictable — if chaotic — correlations between the outputs. These are here to screw with the parameters of the modules to the right.
4 x Different Waveshapers
I wanted to check out various waveshapers — the Lindenberg VC Waveshaper, the Vult Debriatus, the Lindenberg West Coast VC Complex Shaper, and the HetrickCV Waveshaper. They each have their controls modulated by the Caudals.
Audible Instruments Keyframer/Mixer
The Keyframer is being used as a mixer, but it’s unique in that you can record a bunch of different frame volume combinations (as keyframes) and then morph between them, either manually (with the big knob) or by modulation, also coming from the Caudal.
This is a DC Offset remover, and it’s there because waveshaping can introduce a DC bias that messes with a signal’s apparent volume (and also messes with speaker cones). One is used between each waveshaper and the keyframe mixer.
Southpole Balaclava Quad VCA
To introduce some variety in the patch, the VCAs are used to modify the level of the signal. This is tuned to be mostly a slow throbbing.
AS DelayPlus Stereo Fx
What’s a modular patch without some delay or reverb? This stereo delay is tuned to long delays (on the order of seconds) so that the live signal is combined with the delayed signal. This adds some fat to the signal, and also introduces stereo panning.
The two implementations of the Turing Machine Sequencer — in the case of this patch, the one from the Skylights plugin — are not immediately understandable without doing some reading of manuals, which is never anyone’s favorite activity.
Turing Machine sequencers have a property that is one of the best things about modular synthesis (or, in fact, music in general): they take a single simple idea and implement it in a way that can have surprising and musically useful results.
There’s a full document describing what the Skylights folks implemented here, but I think I can describe it very simply. If you look at the byte symbol above, it shows how it is composed of bits. A particular sequence in the Turing Machine uses this byte (or 16-bit word, maybe) in two ways.
The bits are rotated in the buffer. By ‘rotated’ I mean that each bit is shifted one place, and the bit that falls off the end is placed back at the opposite end. This makes sense if you visualize it physically: if you had a row of black & white marbles, you’d take out the rightmost marble and place it in the leftmost position, shifting all the other marbles right one space.
In computing, a byte is two things: a collection of bits, and the representation of a number in the range of 0 to 255 (or, often, one of the ASCII characters).
The Turing Machine Sequencer uses those two representations to generate a pitch and a gate signal. The pitch is the numeric value of the byte, and the gate signal goes from zero to one when the rightmost bit is one.
That’s all that really happens, except for what the LOCK knob does. When the knob is fully counter-clockwise, every time the sequencer receives a clock, every bit in the sequencer’s byte is replaced by a new, random value. When the knob is at 12 O’Clock, half of the bits are randomized. When the knob is fully clockwise, the sequence is locked, and none of the bits change.
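Here’s a minimal Python model of that core loop — my own sketch of the idea, not Skylights’ actual code. It uses an 8-bit register, reads pitch as the byte’s numeric value, and reads the gate from the rightmost bit; `lock` runs from 0.0 (fully random) to 1.0 (fully locked):

```python
import random

def turing_step(bits, lock, rng):
    """One clock of a Turing Machine-style rotating shift register.

    bits: list of 0/1 (the 'byte'); lock: 0.0 = every recycled bit is
    re-randomized, 1.0 = the sequence is locked and just rotates.
    Returns (new_bits, pitch_value, gate)."""
    recycled = bits[-1]
    if rng.random() > lock:                  # unlocked: replace the recycled bit
        recycled = rng.randint(0, 1)
    bits = [recycled] + bits[:-1]            # rotate: rightmost bit wraps to the left
    value = int("".join(map(str, bits)), 2)  # byte as a number -> pitch CV
    gate = bits[-1] == 1                     # rightmost bit -> gate on/off
    return bits, value, gate

rng = random.Random(0)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
bits, value, gate = turing_step(bits, lock=1.0, rng=rng)
```

With `lock` at 1.0, the register simply rotates, so an 8-bit sequence repeats exactly every 8 steps — which is why a locked Turing Machine behaves like a classic step sequencer.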
So when you use the Turing Machine as a sequencer, you have a choice between an always-changing random sequence, an unchanging sequence, and a sequence that changes gradually over time. This example patch comes with a locked sequence that sounds like a classic analog sequencer patch from Kraftwerk or Tangerine Dream.
The output of the sequencer is a tunable combination of chaos and order, and it follows a very musical paradigm. If the LOCK knob is somewhere around 3 o’clock, the playing sequence changes very slowly, a note or two at a time.
It also has one of the most charming features of modular synthesis: because of how the pitches and triggers are generated, they have a deep structural relationship. A change in the underlying data byte changes both the pitch and the trigger in a predictable way. Well, mostly predictable, as it does its magic by random, probabilistic bit flipping.
When two things in music have that kind of relationship, where they’re both tied to different views of the same input, it’s something you can hear. The sound of the SkyLights Alan Turing machine is the sound of that relationship.
Another thing about this patch is the quantizing setup on the pitch output of the Turing Machine: the pitch coming out of the Turing Machine changes at every clock step, so I run it through a sample & hold triggered by the gate output of the Turing Machine. This means the note only changes when a new note is triggered. Then it’s quantized by VCV Scalar. I’ve selected notes that form a sort of 5-note scale, but different from the standard pentatonic scale. This is followed by a Fundamental Octave module, which transposes up or down by one or more octaves.
This is kind of a standard setup for most sequencers that I use, because I want things to add up musically, and I want one pitch per note. You can certainly bypass the sample & hold and go directly from the sequencer to the Scalar quantizer, if you want the effect of the note pitch changing as it decays.
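The sample&hold-plus-quantizer chain can be sketched in Python. The 1 V/oct convention is real; the particular 5-note scale below is just an arbitrary example, not the notes selected in the patch:

```python
def quantize(volts, scale=(0, 3, 5, 7, 10)):
    """Quantize a 1 V/oct pitch voltage to the nearest note in `scale`
    (semitone offsets within the octave; an example 5-note scale)."""
    semis = volts * 12.0
    octave, step = divmod(semis, 12.0)
    nearest = min(scale, key=lambda s: abs(s - step))
    return octave + nearest / 12.0

def sample_and_hold(pitches, gates):
    """Only let a new pitch through when the gate fires; hold otherwise."""
    held, out = 0.0, []
    for p, g in zip(pitches, gates):
        if g:
            held = quantize(p)
        out.append(held)
    return out

out = sample_and_hold([0.5, 0.6, 0.9], [True, False, True])
```

Note the ordering: sample first, then quantize — so every triggered note lands exactly on a scale degree and stays there until the next gate.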
This is a method of patching and modulating delays I find so compelling I felt moved to write about it. This is all done in the software modular system VCVRack, and assumes you have a basic working knowledge of it. It involves the VCV Router plugin, which is a non-free plugin from the makers of VCV Rack, but I consider it a mandatory purchase.
This is a single voice sequenced by a Fundamental SEQ-3 module. The clock triggers the sequencer, which sends pitch to the oscillator and a gate to the envelope. The envelope modulates the volume of the oscillator signal via a Fundamental VCA-1. The only remotely complicated part is in the middle, where the pitch signal is captured in a Sample & Hold triggered by the gate from SEQ-3, then quantized by a JW Quantizer and transposed by a Fundamental Octave module.
This sounds fun, and you can play with delay time and feedback. As it happens this delay module models actual analog delays to the extent that changing the delay time affects the pitch of the delays. If you load this patch you can hear this by turning the delay time knob.
What I’m interested in here is setting up a tempo-synced delay. The AS BPM To Delay Calculator can help out there. Drag the delay time all the way counter-clockwise (it will display 1 MS) and then feed it the output for a particular delay time from the BPM Delay/MS Calc:
Now the delays fall in the rhythmic grid, in this case, a dotted quarter note after the dry signal from the VCA. The fun begins when you modulate the delay time. In this case I use 4 different outputs from the BPM Delay/MS Calc, for dotted half notes, dotted quarter notes, dotted 8th notes, and dotted 16th notes. You can select different delay times by clicking on the ‘Clock’ button on the Fundamental Router 4:1.
Now comes the fun part. I add a Hetrick Random Gates module and send it the gate output of the SEQ-3 to trigger it. I also turn down the Max knob on the Random Gates so that only gates 1–4 are triggered. I then feed the first 4 trigger outputs on the Random Gates into the ‘Sel’ inputs on the Router 4:1. What is the result? Every time a new note is triggered by SEQ-3, a different delay time is randomly selected.
The result is something rhythmically and harmonically interesting — it continually changes, and each time the delay time changes, it changes the playback speed and pitch of the delayed signal. Since we chose 4 different dotted-note delay times, they each have a relationship that is both harmonically and rhythmically coherent. A dotted 16th note is 1/8th as long as a dotted half note, and if you switch between them, the pitch jumps by whole octaves — in the case of dotted 16th to dotted half note, the transition drops the pitch by 3 octaves. If you haven’t considered the math involved, it’s exponential: twice the delay time drops the pitch one octave, 4 times the time drops it two octaves, and so on.
It gets even more interesting if you don’t choose delay times that are multiples of each other — say, dotted 1/2, quarter note, dotted 8th note, and 16th note. A quarter note is 4/3 the time of a dotted 8th, and a 4:3 pitch relationship is a perfect fourth; a dotted half is 3 times a quarter note, which works out to an octave plus a perfect fifth. So when the delay time changes, it also changes the pitch by an interval that is musically interesting! I haven’t worked out all the pitch relationships between different note durations, but listening to the output, it always seems to add up harmonically, no matter which note durations you choose.
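The time-to-pitch arithmetic is just a base-2 logarithm; a quick sketch:

```python
import math

def interval_semitones(time_ratio):
    """Pitch shift in semitones when a tape-style delay's time is multiplied
    by time_ratio: doubling the time drops the repeats by one octave."""
    return 12.0 * math.log2(time_ratio)

octave_drop = interval_semitones(2)    # 2:1 -> one octave (12 semitones)
three_octaves = interval_semitones(8)  # dotted half : dotted 16th = 8:1
fifth_ish = interval_semitones(3 / 2)  # 3:2 -> a perfect fifth (~7.02 semitones)
```

The 3:2 case shows why equal-ratio note durations land on near-just intervals: 3:2 gives 7.02 semitones, a hair wide of the equal-tempered fifth.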
By now, people who care about the music of Richard D. James, aka Aphex Twin, know how he dumped 175 (and counting) unreleased songs on Soundcloud. Like everything he’s done, it’s a body of work that is at turns beautiful, frustrating, and obtuse. The majority of the tracks seem to be Aphex-esque techno and acid house, which is to say his unique combination of standard drum patterns with melodic flights of fantasy and piss-takes.
I had the idea of DJing with these tracks, and when I say ‘DJ’ I mean ‘arrange and blend tracks in Ableton Live’ — which isn’t proper DJing, according to many. That controversy aside, it is the easiest way for me to work; by not having to worry about synchronization and beat-matching, one is free to concentrate on the arguably more important parts of DJing: song selection and sequencing.
What started as a simple project to select some tracks to play in DJ sets turned into an obsession, and I ended up ‘warping’ the entire corpus of tracks — 175 in total. There are only 173 on Soundcloud because 2 were withdrawn.
There’s a ‘Readme’ file in the project ZIP file explaining how to use the warped files, but the TL;DR instructions are “Unzip the mp3 files, unzip the Project, load the project in Live, and tell Live where to find the mp3s.” It should be self-evident to anyone who regularly uses Ableton Live.
Some observations after working through all those tracks:
1. Tempos are almost all very consistent, making me think that he used accurate clock sources & DAT recordings from very early on. There are a very few with the telltale ‘cassette stretch’ tempo drift.
2. There are several with ‘Sequencer Stop’ pauses where he stops the master clock device, allows the effects to decay, and then restarts the sequence off beat. This blows Ableton Live’s mind. I’ve fixed these as best I can, basically pinning a warp marker on the last beat and then dragging the point where the sequencer restarts to the next measure start.
3. Only a few had ‘integral’ BPMs, i.e. 130, 140, etc. Meaning that the tempo clock was only accidentally set to an integral tempo. Or the sequencer device and Ableton Live don’t agree about integral tempos.
4. A couple of them were unwarpable, and I gave up on those.
5. This set of songs was a torture test for Ableton Live’s automatic warping, and I wasn’t impressed, even by the new 9.2 beta version, which supposedly improved automatic warping. It rarely found the downbeat properly, was confused by beatless intros, etc. — even though the tracks have a very steady tempo.
This was an interesting project to undertake, and it allowed me to ‘needle drop’ into every track. There are a lot of impressive tracks in this collection.
This is a recording of two loops playing in Ableton Live. One is a percussion drum rack, the second is the U-He Bazille instrument run through several effects. This loop plays the same notes, but will never actually play the same one bar sounds twice, for two interlocking reasons.
First, both instruments go through a gate effect, which is adjusted so that the threshold sits at the point of metastability, meaning it spends most of its time on the cusp of closing and cutting off the sound.
Second, the Bazille patch uses random LFOs to modulate the levels of two oscillators as they modulate each other. On top of that, each of the two random LFOs modulates the rate of the other, as well as the cutoff of a low-pass filter through which the resulting signal passes. This accounts for the continually changing character of the filtered noise sounds.
In addition, the two MIDI clips driving the sounds are modified by two different groove timings.
So the loop never repeats, and yet it also stays the same. The variety of the loop has musical value — in the same way (but not equal to) a human drummer adds vitality and interest to a repeated drum pattern with micro-variations of timing and dynamics. And the repetition of the loop has musical value, in the way a groove can entrain the listener’s mind.
It’s the wisdom of Heraclitus embodied: “No man ever steps in the same river twice.” It’s the same and not the same. Though I’m neither as wise as Heraclitus nor as musically talented as a significant percentage of humanity.
Sometimes you try something and it’s accidentally kinda compelling. The setup was:
Eventide UltraVerb on one send
Audiodamage Dubstation16 on the second send.
This is straight up tracky. It’s live mixing/tweaking. I actually added effects and the anode while recording. There’s minimal EQ-ing on the Volca Keys and Volca Beats. I did some limiting and EQ on the mix-down and edited out the 16 or so measures where the anode was doing this unpitched farting noise.
Syncing the Volcas to Ableton Live is kind of wonky. It seems to work marginally better if you set the sync mode to pattern. The only way I found to get it tight was to hit the ‘play’ button a few times quickly. If you just hit play once, it always starts out of sync. Somehow resetting the counter to 1:1:0 a few times while Live is playing gets things lined up properly.
I’m doing two posts in one day after months of silence?
This just occurred to me; I had to share.
1. Get a clip loaded — MIDI or whatever.
2. Click on the Groove hot-swap icon, and choose any groove.
3. Set Timing, Random, and Velocity in the Groove.
4. Set ‘Base’ to 16T.
5. Tweak the quantize control.
This will give your clips an adjustable swing; about 11% sounds pretty good.
For extra points, you can hack your own groove:
1. Make a clip with 16 16th notes — the actual note doesn’t matter. A closed hi-hat will help you get the groove right.
2. Mess with the velocity of the notes so that it has some ebb-and-flow-type funk.
3. Apply the triplet swing groove, and hit Commit.
4. Drag the MIDI clip into the groove pool. Your own custom groove!
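As a rough model of what the swing setting does to timing — numbers illustrative only; a real 16T groove pulls notes toward a triplet grid rather than applying a fixed delay:

```python
def swung_sixteenths(bar_len=4.0, swing=0.11):
    """Start times of 16 sixteenth notes in a 4-beat bar, delaying every
    second sixteenth by `swing` of a step (a crude stand-in for groove)."""
    step = bar_len / 16
    return [i * step + (swing * step if i % 2 else 0.0) for i in range(16)]

times = swung_sixteenths()
```

The on-beat sixteenths stay put; the off-beat ones land ~11% late, which is where the shuffle feel comes from.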
There’s something that disturbs me about how people share information on the Internet — video. Particularly narrative ‘how-to’ type videos.
1. Low content per time invested — I’m not the fastest reader on earth, but for most videos, every ten minutes of video contains the same amount of information as a written presentation that would take me a minute or two to read.
And that’s not even considering the fact that no one makes a video without putting 2 minutes of useless introductory material at the front of it. Get to the point!
2. Video ties up your computer. I know you can have more than one window open, but tutorials about music software in particular make it difficult: if you have the video playing alongside your program, the video’s audio output will clash with your software’s, and you have to flip between windows.
3. I hate giving up control over pacing. If I’m reading something, I can skim through several paragraphs, stop and do something, slowly read through the tough parts, etc. With video, all you have is pause, and if you’re following a tutorial, you have to switch windows to pause, and you end up fiddling with the transport a lot.
4. Laziness. Someone can have a general idea of what they want to do and start recording video, but they don’t have a script. You get some extemporaneous, diffuse description, parenthetical digressions, and plenty of ‘um’ and ‘uh’.
I am much more impressed by someone who WRITES IT DOWN, and then edits what they wrote to keep it focused and clear. It may well be that there are true artists of the instructional video in the world, but I am tired of people who make videos because they don’t want to put effort into consciously constructing what they want to communicate.
And this is not limited to amateurs. A lot of the professionally produced tutorial videos are no better.
With the advent of Windows 8, Sound Forge users may run into a registration brain fart that can’t be fixed simply. I ran into this on two different machines. The symptom is that Sound Forge works fine, for a while, and then forgets that it is authorized, and refuses to re-authorize, either on-line or off-line.
I don’t know what the minimum fix is, but following the instructions from Sony Tech Support below should get you past this problem. It isn’t lost on me that this requires digging into things that 99% of Windows users are not comfortable with. Complain to Microsoft and Sony, not me.
We are still looking into the matter why this could not be registering properly on your system although all the information is being entered correctly. If you have not already, try using the registration repair tool for the program: http://www.sonycreativesoftware.com/download/link?id=3126.0 If this does not yield any results please follow the instructions for a clean uninstall and reinstallation of the application below. A clean uninstall is more in-depth than a regular uninstall and will clear out any data of this application that may have been accidentally installed incorrectly through the application.
Before doing a Clean Reinstall, it is important to do the following: •All audio and video effects chains and presets will be erased, so if you need to make a back up of your presets please download our Preset Manager program. For more information about backing up presets: Backup and Restore Audio Presets | Backup and Restore Video Presets
•Safely disconnect any external USB or Firewire devices like hard-drives or dongles.
•Temporarily turn off ALL anti-virus programs, as well as disabling any Registry Blockers, Spy Ware, Firewalls, etc. These applications have been known to interfere with software installation and registration.
To start removing programs, go to Start > Control Panel > Programs and Features, then find and remove your Sony Creative Software applications (ACID, Sound Forge, Vegas, DVD Architect, Cinescore, CD Architect or Media Manager, as well as any other Sony Media Software or Sony Creative Software programs).
Also, remove the Microsoft SQL Server Desktop Engine (SONY_MEDIAMGR), any and all Microsoft .NET Framework versions, and the Microsoft Visual C++ Redistributable software if it is listed.
Once un-installed, delete the following folders: •C:\Program Files\Sony\ (Do not delete this entire folder if you have other Sony applications installed such as Sonic Stage, Everquest, Star Wars Galaxies, etc. If that is the case then only delete the folder for the Sony Creative Software application you are using as well as the Shared Plug-Ins folder.)
•C:\Program Files (x86)\Sony\ (Do not delete this entire folder if you have other Sony applications installed such as Sonic Stage, Everquest, Star Wars Galaxies, etc. If that is the case then only delete the folder for the Sony Creative Software application you are using as well as the Shared Plug-Ins folder.)
•C:\Program Files\Sony Setup
WARNING: The next step will require you to delete Windows Registry keys. The Registry is a very sensitive area to work in. If you are not comfortable with advanced configuration and system changes, ask an administrator to help you with this. (Related Topics: How to back up and restore the registry in Windows: http://windows.microsoft.com/en-us/windows7/back-up-the-registry)
Next, open the Registry Editor. Select Start and type REGEDIT in the ‘Start Search’ box.
In the Registry Editor, locate and delete the following registry entries:
32-bit applications (like ACID and DVD Architect) installed on 64-bit Windows 7 will also store registry keys in a different location. Locate the following registry keys and delete them. (Depending on which versions you have installed, you may see one or more of these entries. If you do not see all of these, that is normal. Delete those which you do find.)
If you locate a folder labelled “Sonic” please DO NOT confuse this with Sonic Foundry. Leave it alone.
HKEY_CURRENT_USER\Software\Wow6432Node\Sony Creative Software
HKEY_CURRENT_USER\Software\Wow6432Node\Sony Media Software
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Sony Creative Software
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Sony Media Software
Close the Registry Editor.
After removing all of the previous items, you may download and re-install from this link –
So today I got this interesting message from Soundcloud:
Our automatic content protection system has detected that your sound “Rubber Duckie (Wub Machine Remix)” may contain the following copyright content: “Get Some Fruit (Wubstep Dubstep Remix)” by Anand Bhatt, owned by Favorecido Productions. As a result, its publication on your profile has been blocked.
You can dispute this report, if you believe the copyright content has been mistakenly identified or if you have obtained all the necessary rights, licenses and/or permissions to upload and share this material on SoundCloud.
FYI I didn’t even remember uploading it to Soundcloud — it was just a joke that took about 5 minutes to put together. I kind of love how it turned out, since Sesame Street is embedded in my DNA. If you need to hear it:
There are several things that are awesome about this:
Soundcloud’s automated copyright infringement detector did NOT detect my actual ‘infringement,’ which was against Jeff Moss and Jim Henson, who wrote and performed the original Rubber Duckie. I claim this is fair use, but I’m not going to the wall on that; this was a JOKE track, it isn’t worth it.
Soundcloud’s audio fingerprint software did detect that there was some common source material in the Rubber Duckie Wubstep remix and that track by Anand Bhatt. That common material is there because Bhatt and I did the same thing: took an audio file and fed it to the Wub Machine, which is a neat hack that ‘converts’ any audio file into bad dubstep. Feed the Wub Machine random songs, traffic noises, outgoing voicemail messages, yadda yadda, and hey presto! Bad dubstep! It’s hours (well, minutes) of fun.
The most hilarious part of this debacle? This guy Anand Bhatt has released a digital EP which you can buy here on Amazon. Bhatt took what sounds like random crappy songs, ran them through the Wub Machine and released them as his own original ‘remixes’!
What conclusions can I draw from this?
Soundcloud’s audio fingerprint software is able to detect common elements in two songs. That’s great, but it can’t distinguish between one song sampling another, and two songs containing common source material. So it’s going to generate thousands of false positives. I guarantee that the worst-paid people at Soundcloud are the poor shmoes who have to wade through all the people contesting false positives for copyright infringement.
Anand Bhatt is a complete tosser. Don’t believe me? Visit his mega-awesome website, or his Amazon Store. All those pictures at the Grammies are curiously absent of any other people, as though he snuck in after hours to get his picture taken in front of the Grammy background. This man has been spending his time inventing an imaginary international rockstar career.
Here’s the transcendent, timeless, original “Rubber Duckie”
Ableton Live has a ton of effects. People spend a lot of time and money (or time looking for W4R3Z, which imho is wasted) to find third party VST instruments and effects to give them a palette of sounds. But before you go crazy buying and downloading stuff, it’s a good idea to fully explore the stuff built in to Live.
The Live MIDI effects are an under-utilized resource for creative sequencing, and the MIDI effect rack I’ve built does something that is to me really inspirational: It takes a stream of midi notes and randomizes their pitch and velocity.
That doesn’t seem like much except for this particular context: If you have a drum rack after this MIDI effect rack, when a MIDI note occurs, it adds a random offset to the note number, and assigns a random velocity. If you load a drum rack with an assortment of sounds — in the case of my example, latin percussion samples — it will generate endless variety of drum patterns with continuously changing accents.
From left to right, the components of this rack are:
Pitch Effect. Adds a fixed offset to incoming notes.
Random Effect. Adds a random offset to incoming notes.
Velocity Effect. Randomly changes velocity of incoming notes.
Velocity Effect. Filters out notes with velocity outside the range lowest to lowest+range.
The actual rhythm is determined by the note pattern that’s playing in the current MIDI track. This is cool because you can use groove templates on (for example) a clip with a steady stream of 16th notes, and the output of the rack will follow the groove template. Every time a note is triggered by the clip, a random offset is added to the pitch, which has the effect of choosing a different drum sound, with a random velocity.
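The per-note logic can be sketched in a few lines of Python. This is a rough model of what the rack does, not Live’s actual implementation, and the parameter names are illustrative (loosely matching the Macro knobs):

```python
import random

def process_note(note, velocity, pitch=0, rand_range=24,
                 lowest=0, vel_range=127, rand_vel=0):
    """One pass through the rack (a rough model, not Live's code).

    Pitch effect:    add a fixed offset to the note number.
    Random effect:   add a random offset in [0, rand_range].
    Velocity effect: jitter the velocity by up to rand_vel.
    Velocity gate:   drop notes outside [lowest, lowest + vel_range].
    Returns (note, velocity), or None if the note is discarded.
    """
    note = note + pitch + random.randint(0, rand_range)
    velocity = min(127, max(1, velocity + random.randint(-rand_vel, rand_vel)))
    if not (lowest <= velocity <= lowest + vel_range):
        return None
    return note, velocity

# A steady stream of C1 (note 36) 16ths becomes an ever-changing
# pattern of drum-rack slots and accents:
pattern = [process_note(36, v, rand_range=12, rand_vel=40)
           for v in [100] * 16]
```

Narrowing the gate (raising `lowest`, lowering `vel_range`) thins the stream out, which is exactly how the use case described further down works.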
The Macro controls on the left side give you control over various parameters.
Lowest: notes with velocities below this value won’t play
Range: notes with velocities above Lowest+Range won’t play
Pitch: Constant offset added to incoming note numbers
Rand Velocity: How much randomness is added to incoming note velocities
Here’s a use case: If you play the third clip in the KW Conga track in the example ensemble, it is a steady stream of notes with a pitch of C1, which in my drum rack corresponds to the first sound. If you don’t want a hit on every 16th note, turning up the Lowest knob will discard notes with low velocity, and turning down Range discards notes with higher velocity. You tune the velocity range with these two knobs to thin out the incoming stream of notes by discarding some of the lowest and highest velocity notes.
The Pitch knob is to get around a limitation of the Random MIDI effect — it only goes up to a maximum offset of 24. Since I have more than 24 sounds loaded in the drum rack, in order to play any of the sounds more than 2 octaves above C1, I have to add an offset. You can also play this knob — or automate it — to change the set of sounds played by the incoming notes. In this particular rack, all the flams are at the top of the drum rack’s note range, so if the Pitch knob is below 8, you won’t get any flams.
The Rand Velocity knob, if turned to zero, doesn’t change incoming velocities at all. This would be useful in the case where you want the Velocity of the Groove template to determine note volumes.
All this is harder to explain than it is to use. Try downloading the example ensemble and fiddle with the knobs, and I think you’ll find that there’s an intuitive feel to using this effect rack. The main thing you need to start with is a drum rack — like the conga rack in the example — driven by clips usually consisting of C1 notes, which is the default lowest note for drum racks. The more sounds you add to your drum rack the more useful the pitch knob will be; if you only have 24 sounds, turning up Pitch will just cause notes to be sent to empty slots in the drum rack.
And if you don’t want to just let this sort of constrained randomness do its thing forever, you can record the output of the MIDI rack in another MIDI track, and then choose a few bars to loop, or find the 4 bars that are almost perfect and tweak them a bit.
This sort of technique isn’t limited to drum sounds. If you’re using this rack with a pitched instrument it will do something random, and perhaps useful. With a pitched instrument, you can add a Scale Live MIDI effect, in order to constrain the notes played to the scale of your choice.
And that’s only the beginning of what you can do with effect racks. Live’s MIDI effect racks have the same ‘multi-chain’ feature of Live Effect and Instrument Racks — you can set up different chains of MIDI effects and use the Chain Select control to choose between them. And once you add in Max For Live MIDI effects, things can really get crazy.
The Random Multitap Delay is a delay effect that randomly, continuously changes the delay time between the input and output. The delay times are based on musical note durations – ¼ note, ⅛ note, ⅛-note triplets, etc. My goal was to use random processes in a way that preserves rhythmic integrity — the output stays in time with the input and any other rhythmic elements in the music.
Internally there is a multitap delay, whose delay time is a multiple of the current rhythmic division. If you select ⅛ for the tap length, the first tap will delay by ⅛ note, the second by 2/8, the third by ⅜, etc.
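In other words, each tap delays by an integer multiple of the base division. A quick sketch, assuming eight taps and expressing times in beats:

```python
def tap_delay_times(base_division, num_taps=8):
    """Delay time of each tap as a multiple of the base rhythmic
    division, in beats: base 1/8 note -> 1/8, 2/8, 3/8, ..."""
    return [base_division * (i + 1) for i in range(num_taps)]

# An eighth-note base (0.5 beats) gives eight taps spanning
# everything from one eighth note up to a full 4/4 bar:
eighth_taps = tap_delay_times(0.5)
```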
The effect switches randomly between the delays over time, effectively re-arranging the input signal in time, shuffling it up. This is particularly effective on drums, because it will generate an endlessly varying rhythmic pattern that will still add up to the ear.
There are two identical delays for the left and right sides of the stereo signal. Since the current delay tap is chosen randomly, the right and left signals will be different even if all the controls are set the same.
It’s actually harder to describe what the effect does clearly than to understand what it does by tweaking the controls, and hearing the results.
There is a hierarchy of chaos in the controls of the Random Multitap Delay. I’ll list them from least chaotic to most chaotic:
Sync and Stepped On
With both sync and stepped set, every rhythmic division, one delay is selected. For example, if 1/8th is selected for tap length and 1/8th is selected for S&H, every eighth note a different delay tap is chosen.
Sync On, Stepped Off
Every rhythmic division, a fractional value is chosen that selects a blend of two delay taps. For example, if the tap length is 1/8th and the selection value is 3.5, you will hear a 50/50 mix of the 4/8ths and 5/8ths delays.
Sync Off, Stepped On
The delay tap selection varies continuously, based on Rand Speed, but only one delay tap is selected at a time.
Sync Off, Stepped Off
The delay tap varies continuously at Rand Speed, and a mix of two delay taps will be heard all the time.
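These four modes can be modeled as a single ‘tap position’ value that is either snapped to a whole number or left fractional, and updated either once per division or continuously. An illustrative Python sketch, not the device’s actual code:

```python
import random

NUM_TAPS = 8  # taps are numbered 0 to 7, as on the device's display

def pick_position(stepped):
    """Choose a new tap position. Stepped snaps to a whole tap;
    otherwise a fractional position blends two adjacent taps.
    With Sync on, call this once per S&H division; with Sync off,
    glide toward new positions continuously at Rand Speed."""
    pos = random.uniform(0, NUM_TAPS - 1)
    return float(int(pos)) if stepped else pos

def taps_heard(position):
    """Mix of taps audible at a given position: one tap for a whole
    number, a crossfade of the two neighbors for a fraction."""
    low, frac = int(position), position - int(position)
    return {low: 1.0} if frac == 0 else {low: 1.0 - frac, low + 1: frac}
```

A position of 3.5 reproduces the Sync On / Stepped Off example above: an even mix of two adjacent taps.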
The meter and numeric display below the stepped button shows you how these controls interact. They will show you exactly which delay tap is playing at a given time. The delay taps are numbered 0 to 7, since I’m a computer programmer ;-)
This chooses a base delay time for the multitap delay. These are standard musical divisions of time — ¼ note, 1/8th note, dotted 1/8th etc.
Controls the rate of change of the delay taps. Every ¼ note (for example) a new delay tap is selected at random for the output.
When this is on, the delay time is selected based on the setting of S&H. When it is off, the delays are switched between continuously at the rate specified by Rand Speed.
Chooses the speed at which the delay selection changes. The numeric value below the knob gives the speed in cycles per second (Hertz).
Determines whether the delay selection is stepped (i.e. selecting just one tap at a time: 0, 1, 2, 3…) or continuous. If Stepped is off, you will hear a mix of two adjacent delay taps most of the time (0.3, 1.7, 2.1…).
Controls the level of feedback for both the left and right delays.
Controls the amount of the left delay that is fed into the right delay, and vice versa
L FB Mode/R FB Mode
Selects the filter that is included in the feedback path of the delays. High Pass, Band Pass, Low Pass etc. ‘Bypass’ is also an option, which removes the filter entirely from the feedback path.
The difference between the left and right feedback filter cutoffs. At 12 O’Clock, L & R filters have the same cutoff. As you rotate left, the left cutoff reduces, and the right cutoff increases. As you rotate right the left cutoff increases and the right cutoff decreases.
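That spread behavior can be modeled as follows; the ±1 knob range and the one-octave-per-unit scaling are assumptions for illustration, not the device’s actual mapping:

```python
def spread_cutoffs(base_hz, spread):
    """Left/right feedback filter cutoffs for a spread setting in
    [-1, 1]. 0 (12 o'clock) leaves both at the base cutoff; negative
    (rotating left) lowers the left cutoff and raises the right,
    positive does the opposite. One octave per unit is an assumed
    scaling, chosen for illustration."""
    return base_hz * 2.0 ** spread, base_hz * 2.0 ** -spread
```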
As software projects go, PaulStretch is rather a shadowy enigma. Since I did the initial Mac OS X port, I’ve had very, very sporadic communications with the author Nasca Octavian Paul about it.
Then there’s the issue of versioning. Paul started a github repository, but it hasn’t been updated since March. It’s currently at version 2.2-2, but the only difference between 2.2-2 and 2.2-1 is that the version number it reports has changed.
At any rate, today I did a new build which is 1) built on OS X 10.6 (forward compatible with Lion, but perhaps not backward compatible with Leopard or Tiger) and 2) up to date, incorporating all of Paul’s changes. I also spent some time playing with it to make sure it works properly.
It also has the latest refinements of the build scripts used to build PaulStretch from source. I use CMake, which is Kitware’s cross-platform build tool. CMake keeps getting smarter, and my CMake recipe for PaulStretch will download all the prerequisite libraries, build them, and then download the PaulStretch source, build it, and generate an Apple App Bundle.
And CMake really is cross-platform — the same build recipe will work unmodified on Linux (which I have tested) and possibly on Windows (which I haven’t tried).
It’s hard to be an electronic musician without developing a fascination with random/stochastic processes as a compositional tool. Particularly because when you pay attention to, e.g., a Max Roach drum solo, he seems to be balancing random choices with intentional ones. While Roach knows what he wants in broad outline, part of what makes his playing great is that he has learned to simply let his muscle memory and hindbrain take over and introduce surprises. By letting go of a score and conscious control, he’s participating in randomness shaped by his will.
Max spent a lifetime developing the skills as a musician to allow this sort of freedom in his playing. This demonstration clip is what happens when you set up many random Max For Live LFOs to modulate many, many different things. At the core, LFOs are modulating the Repeat and Grid parameters of a Beat Repeat effect. Then two more LFOs modulate the effect send levels, going to a reverb and delay. A third LFO modulates the rate of the LFO modulating the Repeat parameters.
Then more LFOs modulate the regeneration level and ‘echo reverse’ parameters of the delay, and the size and predelay on the reverb.
One drum loop is the sole audio source for this. All this modulation introduces a currently fashionable sort of crackle, where changing parameters creates audio discontinuities.
The world of open source software development doesn’t sit still. A program that I rely on to build PaulStretch on OS X is CMake, an open source, cross-platform program that hides some of the complexity of building software on different platforms. If you’ve built any software on OS X or Linux you’re probably familiar with the “./configure ; make ; make install” method of working with source packages. CMake does that, but it goes out of its way to handle the low level crap that is a pain in the ass to set up with autoconf. On top of that, it will run on any Unix, OS X or Windows. And on top of THAT, it will generate Makefiles, or project files for any of the commonly used integrated development environments like Visual Studio (on PC) and Xcode (on Mac). CMake really is as close as you can get to ‘write once, run anywhere’ in the world of C and C++. Not that there won’t be platform-specific stuff you’ll have to do, but it’s a lot easier and more concise in CMake.
Anyway, as of CMake 2.8, there is a powerful new CMake module called ExternalProject. It automates downloading, configuring and building open source packages. I’ve used ExternalProject heavily in my day job, so it seemed natural to use it to streamline building PaulStretch. The result is maybe just as complex as the original build setup, but it is a lot more robust. Reading through the CMakeLists.txt files I’ve set up will be a good introduction to how things work in CMake — I’ve done a bunch of things in there you’ll want to know how to do for your own projects: use ExternalProject_Add to download and build libraries, do some platform-specific configuration, create an executable, etc.
You can download the new PaulStretch Build package here: http://www.cornwarning.com/xfer/PaulStretchBuild-2.1.tar.gz
The instructions are pretty straightforward:
0. Make sure you have the compilers and development libraries installed on your system.
1. Download the tar file.
2. Unpack the tar file somewhere you have write permission.
3. Run PaulStretch/BuildPaulStretch.sh
On OS X, this will create a paulstretch.app that you can drag and drop wherever you want. On Linux, the executable will be in bin/paulstretch — it’s statically linked, so it will run without needing anything besides the program file on your system. Or, for that matter, on any other compatible Linux distribution. The result is an executable program in whatever directory you’ve run this process in.
The following commands accomplish the whole process in a directory called ‘PaulStretch’ in your home directory:
mkdir -p ~/PaulStretch
cd ~/PaulStretch
curl http://www.cornwarning.com/xfer/PaulStretchBuild.tar.gz | tar xzf -
PaulStretch/BuildPaulStretch.sh
After running these commands, on OS X your PaulStretch program will be ~/PaulStretch/paulstretch.app. On Linux, it will be ~/PaulStretch/bin/paulstretch. As an added bonus, I took the time to try building on a couple of different Linux systems to verify it works there.
Once again, what will trip up the non-software-developer types in this whole process is step 0: making sure the dev tools are available on your system. That’s something I’m not going to explain here. Google it. You’ll need GCC installed, all the development libraries, and on Linux the development libraries for libasound — the ALSA sound library.
If you happen to be a Windows developer, you could take a crack at building using Visual Studio or MinGW. The CMake build files are theoretically portable, but you’ll have to download CMake for Windows (here: http://www.cmake.org/files/v2.8/cmake-2.8.3-win32-x86.exe). I haven’t done this, because I avoid doing development work on my Windows machines at home. If I’m at home, and farting around on the computer, I want to be able to just use music software, not build it. Plus you can download the Windows version of PaulStretch here: http://sourceforge.net/projects/hypermammut/.
Let me reiterate — I don’t want to be tech support for this. If you can’t figure out from this post how to use what I’ve put together, you probably shouldn’t even be trying to build it yourself. Ask your kid nephew who’s a big H4X0R to do it for you.