[GH-ISSUE #66] Strymon BigSky! ^_^ #52
Originally created by @glowysourworm on GitHub (Feb 5, 2026).
Original GitHub issue: https://github.com/airwindows/airwindows/issues/66
Hello! (Sorry, I wasn't sure where to put this as a "discussion".)
I'm new to synth programming and am wondering where to get resources on effects processing. Do you have any good books, journals, or subscriptions (even online ones) with solid, technically grounded information?
Also, I'm curious whether anyone has attempted to re-create Strymon's BigSky effects unit. Any clue what sort of algorithmic reverb is being used there? I have attempted both Schroeder and Moorer reverb effect chains, with pretty poor results. They sound very metallic due to the frequency response of the comb filters.
A quick question on real-time / platform audio processing: what's the best method of getting RT output? Are there preferred base libraries? Do you know what base libraries Pro Tools uses? Do you have any experience with RtAudio? Are you familiar with its separate-thread versus blocking-thread usage? Do you know of any differences between Windows and Linux, for audio processing, that are of concern? And, finally, did you program an NAudio wrapper for your (VAST) array of plugins?
Best,
~gsw
@airwindows commented on GitHub (Feb 5, 2026):
No, this is as good a place as any I guess.
No, I don't. I largely code stuff by ear, and there are basic things (like FFT/spectral processing) that I don't like the sound of and don't know how to do :)
Doubtless someone has tried to do BigSky, but I won't be trying; I do other stuff. I did make my IntoTheMatrix program in the Godot engine for the purpose of exploring how to combine delay times using ONLY comb filters, and some people (including me) like the direction this has gone.

But there are many other directions, for instance getting involved with allpass filters. Since allpasses seem to like using 0.618 in their feedback, I like substituting the full-on golden ratio. The original Midiverb couldn't divide and had to use 1 for feedback, something I played with in MV and MV2, so you can experiment with both types of allpass combined with the comb filters.

Matrices (notably Householder, or anything you can get good infinite sustain out of) are good for reverbs: the larger the order (4x4, 5x5, 6x6), the more literal reflections you get out of the end, but also the more complicated it is to work out what the delay times are, which is why I made IntoTheMatrix.
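If it helps, here's the textbook Schroeder allpass as a reference point (a sketch, not code from my plugins; the names and delay length are just illustration). Swap the feedback coefficient around and listen to what changes:

```cpp
#include <cstddef>
#include <vector>

// Textbook Schroeder allpass, direct form II:
//   w[n] = x[n] + g * w[n-D]
//   y[n] = -g * w[n] + w[n-D]
// g = 0.618 is the classic choice; experiment with other values.
struct SchroederAllpass {
    std::vector<double> buf;   // holds the last D values of w
    std::size_t pos = 0;
    double g;
    SchroederAllpass(std::size_t delaySamples, double feedback)
        : buf(delaySamples, 0.0), g(feedback) {}
    double process(double x) {
        double wD = buf[pos];          // w[n-D]
        double w = x + g * wD;         // w[n]
        buf[pos] = w;
        pos = (pos + 1) % buf.size();
        return -g * w + wD;            // y[n]
    }
};
```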
I have no idea how to get the best realtime output other than coding the most primitive C versions of everything, which is what I do. Don't allocate memory when processing, or get 'clean' with your code: the processor doesn't care, and it'll choke if the work gets stuck behind a function-calling chain, allocations, 'good' randomness, or recursion. That said, there are common factors: it used to be that you'd declare all your variables up front and cache the heck out of everything, but modern processors do math faster than they look things up, making memory a possible bottleneck. Declare local variables so they can be used and then forgotten, and do complicated math inline rather than neatly putting stuff in variables and functions.
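As a sketch of that shape (an illustrative one-pole smoother, not one of my plugins): all the state is preallocated, everything in the loop is a local, and nothing on the audio path allocates or calls out.

```cpp
// Illustrative one-pole smoother in the style described: state lives in a
// plain member, everything inside the loop is a local, and nothing on this
// path allocates, locks, or calls out.
struct Smoother {
    double state = 0.0;
    void process(const float* in, float* out, int frames) {
        double s = state;                  // pull state into a local
        for (int i = 0; i < frames; ++i) {
            double x = in[i];
            s += 0.01 * (x - s);           // inline math, no helper calls
            out[i] = static_cast<float>(s);
        }
        state = s;                         // write state back once
    }
};
```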
I know that I had to quit calling rand() for noise and use my own (in my floating-point dither section at the end of all the code), because on Linux rand() was higher quality and could lock up the audio thread: the call would not return until it had a really, truly random number, which could be a long enough pause to drop entire samples while it randomed. Figures that Linux would have the real-deal ultimate rand() and just use it for everything. I did not program any sort of wrapper; they're just off some simple templates for AU and VST, and only the audio code changes. This has the side benefit that, when someone like Baconpaul or one of the Eurorack DSP host projects uses my codebase, they can generally write scripts that will work on everything. I'm not as good at writing scripts (or code), but I can make my output simple and predictable, allowing for this use. Cite Airwindows under the MIT license if you use the codebase, and have fun :)
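(If you want the flavor of the replacement, a tiny xorshift-style generator does it: constant time, no syscalls. This is the general idea, not my exact dither code:)

```cpp
#include <cstdint>

// 32-bit xorshift: three shifts and xors per call, constant time, no
// syscalls, so it can never stall the audio thread the way rand() did.
struct XorShift32 {
    std::uint32_t state = 1u;   // any nonzero seed works
    std::uint32_t next() {
        state ^= state << 13;
        state ^= state >> 17;
        state ^= state << 5;
        return state;
    }
    // Scale into roughly -1..1 for use as noise or dither.
    double nextNoise() {
        return (next() / 2147483648.0) - 1.0;
    }
};
```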
@magnetophon commented on GitHub (Feb 5, 2026):
Some general DSP resources: https://github.com/BillyDM/Awesome-Audio-DSP
A good real-time article: http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing
@glowysourworm commented on GitHub (Feb 6, 2026):
@magnetophon Thanks for the links! That's very helpful!
@airwindows I'm going to have to come back once I've read up on some of your libraries. I'm definitely trying to use low-latency approaches with these effects, and I'm interested in completing a terminal synth with a portable audio library.
Last night I traded RtAudio for PortAudio when I found out RtAudio doesn't support its blocking mode anymore. Essentially, these libraries have two modes: a callback mode, where the library calls your code when it's ready for more samples in the audio buffer, and a blocking mode, where you manage the buffer writes yourself while running your synth code on the same thread.
I think I'm going to have to go with PortAudio. It's much more detailed and picky (which is more difficult), but in the end you get much more information about what is going on in the audio stream, plus variable writes to the audio buffer, with more control.
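For anyone comparing the two, here's roughly what the callback mode looks like in PortAudio (a minimal sine-tone sketch, error handling omitted; names are mine):

```cpp
#include <portaudio.h>
#include <cmath>

// Minimal PortAudio callback-mode sketch: the library calls this function
// whenever it needs more samples; we fill interleaved float32 stereo.
static int audioCallback(const void*, void* output, unsigned long frameCount,
                         const PaStreamCallbackTimeInfo*,
                         PaStreamCallbackFlags, void* userData) {
    float* out = static_cast<float*>(output);
    double* phase = static_cast<double*>(userData);
    for (unsigned long i = 0; i < frameCount; ++i) {
        float s = static_cast<float>(0.2 * std::sin(*phase));
        *out++ = s;   // left
        *out++ = s;   // right
        *phase += 2.0 * 3.14159265358979323846 * 440.0 / 44100.0;
    }
    return paContinue;
}

int main() {
    double phase = 0.0;
    Pa_Initialize();                        // error handling omitted
    PaStream* stream = nullptr;
    Pa_OpenDefaultStream(&stream, 0, 2, paFloat32, 44100.0,
                         paFramesPerBufferUnspecified, audioCallback, &phase);
    Pa_StartStream(stream);
    Pa_Sleep(2000);                         // play for two seconds
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
}
```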
I will take a look at your IntoTheMatrix when I get some time, and I would love to see how your plugins work with a terminal synth. I almost got your kCathedral5 online last night, but the Microsoft compiler failed with link errors. I'm going to give it another try after PortAudio is done. If I manage to get the plugins to load, I'll send you a screenshot! ^_^
Don't work too hard! If you need a rand() function, let me know; maybe I'll stumble across some old C code that gets it from the system clock or something. Actually, there are at least two libraries to try: if you're working with the C++ standard library or Boost, both probably have decent performance.
I do have something you may want: a very accurate clock! It does require the C++ standard library, though. I'm not on Linux / Boost (yet), mostly because of work. If you need something like this, we can talk it through here. I'm using one for my terminal synth.
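The gist of it, as a sketch (std::chrono::steady_clock is the portable monotonic clock; the struct name is mine):

```cpp
#include <chrono>

// Simple stopwatch around steady_clock: construct it, do work, then read
// the elapsed time (switch microseconds to nanoseconds for finer units).
struct Stopwatch {
    std::chrono::steady_clock::time_point start =
        std::chrono::steady_clock::now();
    long long elapsedMicros() const {
        return std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start).count();
    }
};
```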
-Best
@glowysourworm commented on GitHub (Feb 9, 2026):
@airwindows Hello again! I finally managed to get your plugin running, and I promised you a screenshot, so I posted one in my console-synth project. (The "Reverb" is your kCathedral plugin.)
I have some questions:
How many samples do you generally send to your plugins? I'm currently feeding in a single L/R sample pair at a time, which seems to work. I'm just wondering about audio quality, and how you built your library. What sampling rates do you usually use?
Actually (and I'm assuming this is true for all your plugins), do you use float-32 L/R sample formats across the board? Are there any differences? Some device formats vary, but I'm not seeing any issues just using a float-32 L/R format.
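For context, here's roughly the pattern I'm using, assuming the VST2-style processReplacing(float** inputs, float** outputs, int sampleFrames) entry point, one frame per call. The PassThrough struct just stands in for a real effect so the sketch is self-contained:

```cpp
#include <cstdio>

// Stand-in with the VST2-style processReplacing signature; a real effect
// instance would replace it.
struct PassThrough {
    void processReplacing(float** in, float** out, int frames) {
        for (int i = 0; i < frames; ++i) {
            out[0][i] = in[0][i];   // left
            out[1][i] = in[1][i];   // right
        }
    }
};

int main() {
    PassThrough plugin;
    float inL = 0, inR = 0, outL = 0, outR = 0;
    float* ins[2]  = { &inL, &inR };
    float* outs[2] = { &outL, &outR };
    for (int n = 0; n < 4; ++n) {          // a few frames, one at a time
        inL = inR = n * 0.25f;             // fake input samples
        plugin.processReplacing(ins, outs, 1);
        std::printf("%f %f\n", outL, outR);
    }
}
```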
Can you recommend some good effects for synths? Are there some built just for acoustic instruments? I'm getting some nasty metallic ringing in the synth reverb, but this is only a first try. I'm going to try loading all your plugins via a separate project that I can link to. (I had linking issues with your code, but I think they're just related to Windows SDK versions; it's hard to know without seeing it fail. So there'll be another base lib for your plugin set, as long as I can get it to build without hand-correcting the errors!) (You have a lot of plugins!)
I believe you have sound sample kits, also. I'm wondering about basic MIDI voices, like a piano tone, etc. I'm assuming yours is quite extensive. Do you have an easy way to do those as well? (I'll take another look.) Also, are there presets of synth voices made for that purpose (mimicking instruments)?
Best,
~gsw
@glowysourworm commented on GitHub (Feb 10, 2026):
@airwindows Some very good RtAudio results!
Had to pass this on: I was just able to reduce the frame buffer size in RtAudio from the Microsoft-recommended 512 frames all the way down to 3 frames before it started to distort (at 2 frames)!
This is far lower than I expected after trying to make sense of WASAPI (using Windows.h).
So, my "very accurate timers", I thought, were showing me the system latency - which was ~10ms for the front end to process. Now I know where that time was spent: (backend = RT Audio -> OS, frontend = Code you write to process audio, UI frontend = UI thread code, Audio = RT Audio thread + your audio callback) (these help to understand the timer output)
The frame buffer hand off (inside the "backend") was taking most of the time. So, copying those 512 samples was THAT EXPENSIVE! ~10ms is a LIFETIME for the OS to respond.
For the 512-frame callback, your kCathedral effect took ~300 µs to process, which is approximately what my other buffered effects (FIR/IIR) took.
The 3-frame callback takes ~56 µs, with FULL EFFECTS ON! That's incredible performance! Your kCathedral may have taken ~1 µs, if any time at all, which was borderline noise in the timer. (If you use nanoseconds, you can change units on those timers.)
Having the UI on a separate thread very much pays off. So, if you write an audio program, you'll probably have better results with a callback-structured application instead of blocking; however, you may have more control with blocking (synchronous) programming. I also throttled the UI refresh rate to roughly what its timer output shows. The synth updates on the UI timer too (which you'll notice). So, if you play with it at all, you should get a pretty good idea of RtAudio's performance.
So, the "latency" that everyone complains about with RT synth's on a desktop is actually BACKEND latency. I'm not sure whether other host API's allow you to have more control over the backend; but that would be where the next layer of the RT audio onion is.
Anyway, that's not thoroughly discussed on most threads I find online related to basics.. So, there would be some confusion about what "latency" means. I'm planning to leave the frame buffer size as a command line variable; and am wondering about other DAW's and whether they allow you to configure your audio input/output interface with respect to "latency".
(I'm putting out an updated screen shot if you're interested.. the timers should all be on the top right now)
Thanks
@glowysourworm commented on GitHub (Feb 13, 2026):
@airwindows Sorry to bother you again, but here's the next topic I'm concerned with: instrument synthesis. If you have a good place to point me to an example, that would be helpful.
I'm studying part of a very good synth project, ZynAddSubFX, which has a page on their PADsynth here:
https://zynaddsubfx.sourceforge.io/doc/PADsynth/PADsynth.htm
I believe I understand how a wavetable functions: you simply load a single period of the wave and then interpolate its samples during playback.
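In code, I picture it roughly like this (a sketch with linear interpolation; names are mine):

```cpp
#include <cstddef>
#include <vector>

// Single-cycle wavetable with linear interpolation: 'phase' runs 0..1 per
// period and advances by freq / sampleRate each sample.
struct WavetableOsc {
    std::vector<float> table;    // one period of the waveform (non-empty)
    double phase = 0.0;
    double increment = 0.0;
    void setFrequency(double freq, double sampleRate) {
        increment = freq / sampleRate;
    }
    float next() {
        double idx = phase * static_cast<double>(table.size());
        std::size_t i0 = static_cast<std::size_t>(idx);
        std::size_t i1 = (i0 + 1) % table.size();   // wrap at the last sample
        double frac = idx - static_cast<double>(i0);
        phase += increment;
        if (phase >= 1.0) phase -= 1.0;
        return static_cast<float>(table[i0] * (1.0 - frac) + table[i1] * frac);
    }
};
```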
If I were to try to get their example above to work, I'd have trouble figuring out their signal chain.
I don't think these wavetable-synthesis problems are all that hard to figure out, but I don't have a good comprehensive source, with examples, for a simple beginner's project. I do have your other resources on DSP, though, which have been very useful so far.
I just need some sort of source on wavetable synthesis.
Thanks.
Oh, if you've ever played around with their samples: I figured out that they store all their files as zipped XML, so just rename them to .zip and unzip them to get the XML. Otherwise, you may not be able to figure out their source code. (These would be their .xmz files, or their sound bank files. If I could get their sound banks working, I'd include them as examples and have a very nice library off the bat.)
@magnetophon commented on GitHub (Feb 13, 2026):
Not sure how suitable it is for a beginner, but this is a great talk on implementing wavetable oscillators: https://www.youtube.com/watch?v=qlinVx60778
@glowysourworm commented on GitHub (Feb 19, 2026):
@magnetophon Thanks for the link; it was a good talk. I also have a couple of questions about Airwindows effects plugins.
I got your plugins running and can now chain them in a signal chain. You might want to know that there's a compile/link issue for an updated Windows build, related to the Windows SDK (8.1 -> 10.x), that prevents the build. So I had to pull your code into a separate project, but with very few changes I managed to get it running.
I'm using wavetables now for synthesis, and I'll probably add some instrument samples to the oscillator section. But the audio quality is still a bit finicky, so I'll have to do some reading on how to nail down the sample interpolation. (I believe I've introduced some harmonic distortion there because of misaligned samples.)
It's very close, though. If you have some details on buffer sizing for your effects, I'd greatly appreciate them. Thanks!
@glowysourworm commented on GitHub (Feb 20, 2026):
@magnetophon (Correction: found the issue running the plugins.)
I had a frame-related issue with the plugin wrapper I created for your plugins. They're working, but the project is being rebased from console-synth to TerminalSynth. Using sound samples with them should show more "realistic" output, since you've spent so much time on audio-related, recording-quality effects.
You guys are probably getting sick of me, but I'd like to say that I have your plugins as part of this project. The output will end up being a very small terminal application. I'd like to learn how to install and use your plugins appropriately, and I have to produce a CMake release build, so this will take some time to get right.
Perhaps we could talk a bit about this; I'd offer that there may be a way to help out your project with a portable build(?)
Are you in need of a portable build for your plugins? I could show you what that link error was about, but there's very little that should prevent portability. (The CRT header was being used, I think, just for pi: the M_PI and M_PI_2 symbols.)
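For anyone hitting the same link error: those constants can be defined directly, which is how I removed the dependency (values are the standard ones; names are mine):

```cpp
// The constants the CRT math header was providing, defined directly
// (M_PI is pi, M_PI_2 is pi/2).
constexpr double kPi  = 3.14159265358979323846;   // M_PI
constexpr double kPi2 = 1.57079632679489661923;   // M_PI_2 (pi/2)
```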
I would very much like to know how you run your plugins. Were you using JUCE? I haven't tried that framework yet. Perhaps it has better RT audio libraries than RtAudio or PortAudio(?)
Anyway, I put your link on the TerminalSynth page, so at least other users will know.
@glowysourworm commented on GitHub (Mar 14, 2026):
@airwindows OK, so just another update. I'm still bugging you guys because I have questions about how to get started on a synth project. I also just got JUCE working and saw their amazing demo and development framework! (I would assume you want your plugins to fully utilize JUCE, correct? They have a launch pad for making a VST/AAX plugin in their Projucer.)
So, I'm assuming you have been where I'm at and have already figured out some of these things: JUCE has an extensive audio framework AND an OpenGL UI framework. (Does it use PortAudio?) I've managed to play with their demo and could see an easy way to develop a synth project. However, I'm looking to keep my TerminalSynth project smaller and very focused on just building effects libraries, synth signal chains, and instrument voices (also choir).
RtAudio / PortAudio give you wrappers for all the prep work involved with native audio I/O. Just today I finally figured out my PortAudio issue, and I have full bandwidth (clarity), FINALLY. My issue was that I was trying to follow the stream time to prepare samples. The better solution was to just use a frame-index cursor. The backend will scramble to make sure you have the right frame count if there's any possibility of an underflow (PortAudio has an option to do this). But, as I've said, your frontend processing code has an eternity to complete before you need to provide samples to the backend. Trying to use a "delta time" didn't entirely work; it just sounded muffled, or distorted on the high end by a very small amount.
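The fix, in miniature (a sketch; the counter and function names are mine): derive phase from a monotonically increasing frame counter rather than the stream's reported time.

```cpp
#include <cmath>

// Derive oscillator timing from a monotonically increasing frame counter,
// so backend underflow recovery can't smear the timing.
static unsigned long long g_frameCursor = 0;   // illustrative global

void renderBlock(float* out, unsigned long frames,
                 double freq, double sampleRate) {
    const double twoPi = 6.28318530717958647692;
    for (unsigned long i = 0; i < frames; ++i) {
        double t = static_cast<double>(g_frameCursor + i) / sampleRate;
        out[i] = static_cast<float>(0.2 * std::sin(twoPi * freq * t));
    }
    g_frameCursor += frames;   // for long runs, accumulate phase instead,
                               // to avoid precision loss in freq * t
}
```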
I was also able to get PADsynth's IFFT algorithm "functioning", but that was also part of today's fix-PortAudio solution. So, next will be to put the synth back in order. (I'll put an audio sample on my site later; I particularly like using Galactic3 + sawtooth.) The type of synthesis he's discussing on his IFFT page is... "proprietary", although the method is discussed (I think I pasted the link above somewhere). So, if I can get the IFFT method going, I'll parameterize it and add it to the oscillator section, so people can see it work and own his bunk-a** >_<
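For the curious, the method from the page linked earlier boils down to: spread each harmonic across the spectrum with a Gaussian profile, give every bin a random phase, and resynthesize. A rough sketch of that idea (using a naive inverse DFT in place of a real IFFT library; all parameter values are my own illustration):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdlib>
#include <vector>

// Sketch of the PADsynth idea: Gaussian-widened harmonics, random phases,
// then resynthesis. A real implementation uses an IFFT; this naive inverse
// DFT just shows the math.
std::vector<float> padsynthTable(int N, double sampleRate, double f0,
                                 const std::vector<double>& harmonicAmps) {
    const double pi = 3.14159265358979323846;
    std::vector<double> amp(N / 2, 0.0);
    const double bw = 40.0;                    // base bandwidth in Hz
    for (std::size_t h = 1; h <= harmonicAmps.size(); ++h) {
        double fh = f0 * static_cast<double>(h);
        for (int k = 1; k < N / 2; ++k) {
            double fk = k * sampleRate / N;    // frequency of bin k
            double x = (fk - fh) / (bw * static_cast<double>(h));
            amp[k] += harmonicAmps[h - 1] * std::exp(-x * x);
        }
    }
    std::vector<float> out(N, 0.0f);
    for (int k = 1; k < N / 2; ++k) {
        // rand() is fine here: this runs offline, not on the audio thread.
        double phase = 2.0 * pi * (std::rand() / (double)RAND_MAX);
        for (int n = 0; n < N; ++n)
            out[n] += static_cast<float>(
                amp[k] * std::cos(2.0 * pi * k * n / N + phase));
    }
    float peak = 1e-9f;
    for (float v : out) peak = std::max(peak, std::fabs(v));
    for (float& v : out) v /= peak;            // normalize to -1..1
    return out;
}
```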
Did you manage to get your Linux build working? Did you need some help with CMake, or JUCE? I'm going to work with JUCE sometime later on; I have their CMake building, but I just haven't worked with their code or libs yet. Actually, I don't see their libs in the output, but I'm not going to spend more time with JUCE right now.
Actually, I have my own CMake to do, along with making the build completely portable. This will require a wrapper for getting the keystrokes on Windows, Mac, or Linux. There will be details for PortAudio, and I'm going to remove RtAudio because it's not as well supported.
If I were working on a UI-based pretty-synth, I'd spend time with JUCE from the get-go. My version is the kick-the-lion-mama's-a**, grass-roots, own-it-your-own-way approach: work on one piece that will function (i.e., synth effects, plugins, things that will transfer onto another application). So, we'll see how it goes. JUCE seems to have everything else done for you, but you're stuck with what they have, and you have to learn their whole build/development system.
It might be easy to wrap an effects library for JUCE, in case you wanted to do that. Your VST (AAX?) plugin would be easy to do. Then I would expect all the other "DAWs" to pick up your plugins if they're installed and registered somewhere; Audacity would be one. I would definitely build and install your plugins for my Audacity installation. (I play guitar and do some very simple home recording.)
~gsw
@airwindows commented on GitHub (Mar 14, 2026):
Um. There is only me. I'm one person in Vermont doing literally everything that's Airwindows apart from Consolidated (that's Baconpaul, not me), and I am still putting out one new plugin a week on retro AU, retro Win32 VST, Win64 VST, modern signed AU, modern signed Mac VST, Linux VST and Raspberry Pi VST. When I drop something like ConsoleH, that's a JUCE product done using Sudara's Pamplejuce CI integration for github. When Consolidated's build breaks, I have to ask Baconpaul for help: when Pamplejuce doesn't work I have to ask Sudara for help.
Hope you don't mind that I'm just reading these reports of yours? They don't seem to have anything to do with me or what I do :) If you're able to say 'I would assume you want all your plugins to fully utilize JUCE, correct?' then you are not paying attention as well as you could be. But then, you thought I was a team, which is gratifying :)
@glowysourworm commented on GitHub (Mar 14, 2026):
OK, well, I haven't gone through all your code yet... I was just wondering how your build process worked, because there was a Windows SDK issue, which I solved by removing your CRT dependency. (Your use of M_PI was the only portability issue; as far as I can tell, you probably don't need the CRT header, which is part of the Microsoft C runtime.)
I was just trying to help out while also learning about JUCE. For my project, I have put in simple "AudioController" classes to serve up RtAudio, PortAudio, and probably JUCE's backend. I would like to learn JUCE's build process too, and take time with your code, but this will still be a while, as I'm just getting things in order. Also, JUCE is an entire UI/audio framework, not just the native I/O portion. I wasn't planning on building anything extravagant, so I didn't necessarily want a larger framework. It may be that it works out better, but the only way to know is to try building a simple one yourself first.
So, as for the use of Airwindows, I wasn't sure what to do for my project. I got your code working, but I won't be able to download your new code and update my project with just your lib files (instead of the source). So that's one part of my project. I'm also planning on learning about VST and the typical AAX plugin designs that are part of JUCE (for example), and how they work together with other programs (like Audacity, which I use).
This will include putting Airwindows plugins into Audacity for my home recording. It sounds like you've already done that, then? I wasn't sure what to do. If it helped out, I'd contribute to your project; but for now I have to read your source code, get acquainted with JUCE, and put my synth project in order to show a proof-of-concept build.
I'm willing to talk about any of this as it pertains to Airwindows. Thanks