Posts posted by borfo

  1. With a headless version of the BLM scalar app that had a configuration file you could edit to set up your SEQ and Launchpads (and ideally some handling of disconnects and reconnects), we could put together a standard Raspberry Pi OS SD card image that auto-launches the BLM app on boot...  It could be set up to mount the root filesystem read-only, so you could just unplug it to turn it off without worrying too much about filesystem integrity.

     

    It'd be pretty easy to add something to the OS image using mididings and a USB-to-MIDI adapter to translate USB MIDI to DIN MIDI and handle other MIDI routing, too...

  2. I got the BLM scalar app working on the Raspberry Pi (Debian) - I can use the Pi to run the Novation Launchpad BLM.  It works pretty much perfectly, although it occasionally leaves some LEDs lit behind the current step indicator until it passes again.  No big deal...

     

    The compiled Raspberry Pi binary is here:

     

    http://www.boxx.ca/BLM-Rpi

     

     

    Let me know if it works for you.

     

    It segfaults if it is run from the terminal without X (TK, could this be made headless?).  So, set your Pi to automatically log in, run startx, and then start the BLM emulator, and you've got a headless device to run the Launchpad BLM.

     

    You might have to install the following JUCE dependencies on your Pi:

    sudo apt-get -y install freeglut3-dev libasound2-dev libfreetype6-dev libjack-dev libx11-dev libxcomposite-dev libxcursor-dev libxinerama-dev mesa-common-dev

     

    You'll need a powered USB hub if you're running the SEQ and two Launchpads from the Pi...  But you can power the Pi from the same hub.

     

    ############################

     

    To compile it yourself:

    On your computer:

    Open the BLM scalar app in the Introjucer.  Add the following in the Extra Preprocessor Definitions section:

    JUCE_USE_XSHM=0 JUCE_USE_XINERAMA=0

    Save the project, and copy it over to your Pi.  I copied the whole MIOS32 directory over.

    On your Pi:

    sudo apt-get -y install freeglut3-dev libasound2-dev libfreetype6-dev libjack-dev libx11-dev libxcomposite-dev libxcursor-dev libxinerama-dev mesa-common-dev libjack0

    (JUCE dependencies)

    Enter the JUCE project's Builds/Linux directory.

    Recent JUCE builds require GCC 4.7 - my compile was throwing errors before I did this:

    sudo apt-get install gcc-4.7 g++-4.7
    sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.6 60 --slave /usr/bin/g++ g++ /usr/bin/g++-4.6
    sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.7 40 --slave /usr/bin/g++ g++ /usr/bin/g++-4.7
    sudo update-alternatives --config gcc

    ...and choose 4.7

    ...then "make".  The compile is slow, but it worked for me - took about half an hour.

  3. Cool.  Thanks!

     

    It seems that the UI doesn't give access to all parameters yet, right?

     

    Did you grab my code from my last post with attachments?  Everything should be working...  Does the version you were working from have the "humanize mask" thing (hold SELECT to edit; the mask is indicated by the GP LEDs and edited with SELECT+GP buttons; steps not in the mask don't get humanized)?  If you're working from that version, all the UI elements should work.

     

    Btw.: is this still a "humanizer"?

    Or a randomizer, or robotizer, or noodleizer? ;-)

     

    I vote for "Robotizer."

  4. Would it be possible/not-too-hard to add the following indicators:

     

    - the round step buttons along the top light up dim yellow to indicate which steps in the currently visible measure have notes set, and

    - when ALT is held (to switch octaves), the round track buttons on the right side light up to indicate which octaves have notes set?

     

    If they wouldn't be too hard to implement, I think both of these indicators would be pretty useful to help you remember where notes are set that are not visible on the currently displayed steps...

  5. In making this, I broke (commented out) the bit of code in seq_humanize.c that checks for the random trigger layer:

     

      // check if random value trigger set
      if( SEQ_TRG_RandomValueGet(track, step, 0) ) {//RK - I broke this.
    //    mode = 0x01; // Note Only
    //    intensity = 4*24; // +/- 1 octave
      } else {
     

     

    ...and a bunch of my humanize variables are declared outside the normal CC range (above 127) in seq_cc_trk:

     

    //RK - probability vars - out of normal CC range
    #define SEQ_CC_HUMANIZE_SKIP_PROBABILITY   0x80
    #define SEQ_CC_HUMANIZE_OCT_PROBABILITY    0x81
    #define SEQ_CC_HUMANIZE_NOTE_PROBABILITY   0x82
    #define SEQ_CC_HUMANIZE_VEL_PROBABILITY    0x83
    #define SEQ_CC_HUMANIZE_LEN_PROBABILITY    0x84
    #define SEQ_CC_HUMANIZE_ACTIVE             0x85
    #define SEQ_CC_HUMANIZE_MASK1              0x86
    #define SEQ_CC_HUMANIZE_MASK2              0x87

     

     

    A few small changes to some other files were required to make this compile (commenting out the saving of the old humanize variables, for example)...  But the main stuff needed to get this working is in the files I've attached above.

     

    I just listened to three one-bar tracks play more or less by themselves for like an hour...  This thing produces pretty musical results...  Lots of little subtle (and not so subtle) variations...

     

    ...edited the first post to add the .hex files, a brief description of the added features, and a picture of the UI.

  6. haha...  Wow, is this thing ever fun to code for.

     

    I implemented the humanize mask - on the humanize page, hold SELECT and press the GP buttons to edit the 16-step humanize mask.  If the bit for (current step modulo 16) is set in the mask, humanize will be applied; if it isn't, the current step will not be humanized.
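
    Conceptually the check is just a bitmask test - here's a minimal sketch of the idea (not the actual seq_humanize.c code; I'm assuming the 16 mask bits are stored across two 8-bit CC slots like the MASK1/MASK2 CCs I defined, and the function and variable names here are made up):

    #include <stdint.h>

    // Rebuild the 16-bit mask from the two 8-bit CC slots (mask1 = steps 1-8,
    // mask2 = steps 9-16) and test the bit for the current step.
    static int humanize_mask_allows(uint8_t mask1, uint8_t mask2, uint16_t step)
    {
      uint16_t mask = (uint16_t)mask1 | ((uint16_t)mask2 << 8); // full 16-step mask
      uint8_t bit = step % 16;     // the mask repeats every 16 steps
      return (mask >> bit) & 1;    // 1 = humanize this step, 0 = leave it alone
    }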

     

    I also added an on/off switch for the humanize function.

     

    This codebase is great - most of this stuff "just worked" on my first try, and I'm more or less guessing at how to access global variables and how to deal with steps...

     

    [files deleted...  current project.hex files are in first post]

  7. Wow...  This is pretty fun in combination with FX_echo and the DIR menu page stuff.  With a couple of tracks going, this gets very generative-sounding...  Neat.

     

     

    TK - is there a flag I could temporarily set so I could add a "NOFX" humanizer?  ...so that no further FX except force-to-scale (ie: no echo, lfo, etc...) would be applied to the current step before it's played?  ...or temporarily change the number of echo repeats?  ...or temporarily enable sustain?

  8. Ok...  I implemented a "skip note" humanizer, and added individual probabilities for each of the humanizers, as well as an overall probability that any humanizers will be applied.  The vertical bars beside the humanizer parameters represent the relative probability that that humanizer will be applied (so, for example, you can set it up so notes are changed more often than octaves, etc.).  Hold SELECT and turn the encoders to set the individual probabilities.

     

    The numbers under each humanizer are ranges - eg: if Octave is set to 2, the note will jump up or down 0, 1, or 2 octaves.  If Note is set to 3, the note will jump up or down 0, 1, 2 or 3 notes, in addition to the octave shift.
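
    In code terms, a range basically boils down to drawing a signed offset - roughly like this (a simplified sketch rather than the actual code; rand8() stands in for whatever random source the firmware uses, and none of these names come from the codebase):

    #include <stdint.h>
    #include <stdlib.h>

    static uint8_t rand8(void) { return (uint8_t)(rand() & 0xff); } // stand-in random source

    // Draw a signed offset in -range..+range (so range = 2 gives -2, -1, 0, +1 or +2).
    static int random_offset(int range)
    {
      return (rand8() % (2 * range + 1)) - range;
    }

    // e.g.: note += 12 * random_offset(oct_range) + random_offset(note_range);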

     

    When each humanizer is called, the overall probability is multiplied by that humanizer's individual probability, and that number is compared to a random number.  If the random number is lower, then the humanizer is applied.
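
    So the gate for each humanizer is roughly this (again a simplified sketch rather than the real implementation, assuming 0..127 probability values and reusing the rand8() stand-in from the sketch above):

    // Returns 1 when this humanizer should fire on the current step.
    static int humanizer_fires(uint8_t overall_prob, uint8_t individual_prob)
    {
      // combine the two 0..127 probabilities into a single 0..127 threshold
      uint8_t threshold = (uint8_t)(((uint16_t)overall_prob * individual_prob) / 127);
      return (rand8() & 0x7f) < threshold; // fires when the random draw lands below the threshold
    }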

     

    I weighted the velocity randomization a bit in a totally convoluted "I don't know math" kind of way...  Bad math aside, it produces pretty good-sounding results.

     

    This thing is fun to code for...

     

    [files deleted...  current project.hex files are in first post]

  9. Cool - that was easy...  Still a work in progress, but I added an "octave" humanizer (Note now switches among notes close to the original note, and Octave jumps whole octaves) and gave each of the humanize options its own intensity rather than just an on/off setting.  I've still got some work to do on how the humanization is applied, and I think I'll also add an overall "probability" setting (the chance that each of the humanizers is applied to each step) and a "skip notes" setting (the chance that a given note won't be played), but I got a basic implementation working anyway...

     

    Is there any documentation on the gp_leds settings anywhere?  This stuff (lights the LEDs to indicate the selected parameter):

     

        case ITEM_GXTY: *gp_leds = 0x0001; break;
        case ITEM_HUMANIZE_OCTAVE: *gp_leds = 0x000e; break;
        case ITEM_HUMANIZE_NOTE: *gp_leds = 0x0010; break;
        case ITEM_HUMANIZE_VELOCITY: *gp_leds = 0x0020; break;
        case ITEM_HUMANIZE_LENGTH: *gp_leds = 0x00c0; break;
     

     

    Also, I could probably get away with using 4-bit (or even 2-bit for the octave setting) numbers rather than 8-bit numbers for some of these settings to save some space in seq_cc_trk.  Shorter number ranges would make the humanizers easier to use (no need to turn the encoder so many times), and I could store two settings in one CC byte that way...  Is there an example of storing more than one 2- or 4-bit number in the seq_cc_trk bytes anywhere in the codebase?
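
    (The packing itself would just be shifts and masks - something along these lines, not taken from the codebase:)

    #include <stdint.h>

    // Pack two 4-bit settings into one 8-bit CC slot, and pull them back out.
    #define PACK_NIBBLES(hi, lo)  ((uint8_t)((((hi) & 0x0f) << 4) | ((lo) & 0x0f)))
    #define UNPACK_HI(byte)       ((uint8_t)(((byte) >> 4) & 0x0f))
    #define UNPACK_LO(byte)       ((uint8_t)((byte) & 0x0f))

    // e.g. octave range in the high nibble, note range in the low nibble:
    //   uint8_t cc = PACK_NIBBLES(oct_range, note_range);  // store
    //   oct_range  = UNPACK_HI(cc);                        // read back
    //   note_range = UNPACK_LO(cc);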

     

    Could I arbitrarily define seq_cc_trk numbers above 0x7f without breaking anything?  I realize these numbers wouldn't be accessible by CC, but if I'm just looking for a place to store variables while experimenting with the humanize function, would defining higher seq_cc_trk numbers work?

     

    Great codebase, by the way, TK - it's really easy to work with, even knowing hardly anything about C or hardware programming.  The toolchain setup is super easy too - with some other open hardware projects I've had real problems just getting to the point where I could compile anything, but the MIOS toolchain setup was painless...

     

    [files deleted...  current project.hex files are in first post]

  10. edit: attached project.hex files for the LPC and STM32F4 cores in case anyone wants to try this out - let me know if you notice any bugs.  The STM hex is untested - I haven't built my STM core yet.

     

    How to use the Robotizer:

     

    MENU+FX -> Robo - enters the robotizer page

     

    Robot on/off - enables and disables all robotizers for this track.

     

    Prob - the overall probability that any robotizers are applied.  This overall probability setting is combined with individual probability settings for each robotizer, so you can make one robotizer happen more often than others (eg: change notes often, but never skip them, and only rarely change the velocity).  Hold SELECT and use the encoders to set the individual robotizer probabilities (indicated by the vertical bars beside each robotizer).  The individual probability is multiplied by the overall probability, then compared with a random number.  If the random number is lower, the robotizer will be applied.  A different random number is drawn for each individual robotizer.

     

    Skip - skip some steps (play them at zero velocity)

     

    Octave - changes octaves.  The selected number is the range - if "2" is selected, the note will jump up or down 0, 1, or 2 octaves.

     

    Note - changes notes.  Cumulative with octave changes.

     

    VelCC - velocity changes.  This robotizer is weighted using some bad math in order to avoid out-of-range numbers and produce more musical results.  As a result of the way it's implemented, it has a distinct character when the number is low (you'll get more loud accents if you keep the number low and the starting velocity is already relatively loud) and a different character when the number is higher.

     

    Len - changes the gate length.

     

    Sustain - adds sustain to some notes.

     

    NoFX - temporarily disables Echo and LFO.

     

    +Echo - Adds echo to some notes.  Uses the settings on the Echo FX page, ignoring the FX enabled/disabled state on the Echo page.  In other words:  adjust the +Echo setting on the FX robotizer page to occasionally apply echo to notes, using the echo settings currently selected on the FX echo page.

     

    +Duplicate - Adds FX Duplicate to some notes.  Uses the settings on the FX Duplicate page.

     

     

    Prob, Skip, and NoFX do not have ranges, only probabilities.  The vertical bar and the number for these robotizers control the same variable.

     

    ROBOTIZER MASK:

     

    - hold SELECT on the robotize page and use the GP buttons to edit the robotizer mask (indicated by the GP LEDs).  Only selected steps will have robotizers applied to them.  The mask is 16 steps long, so it repeats from measure to measure.

     

     

     

     

    [picture of the Robotizer page UI]

     

    ###############################

     

    As a first MIDIbox programming project, I'd like to screw around with the humanize function a bit...  I'm thinking I'll duplicate it, call it humanize2 or something, and add a few enhancements.  To start with I think I'll make note, velocity and length each have individual intensity settings rather than just on/off, then maybe add a few other features, like probability of playing each note, delays and repeats, etc.  Then, I was thinking I might add a "humanize mask" - use the GP buttons to select steps in a measure, and the humanizing would only apply to the selected steps (so you could set it up to only humanize certain steps in each measure).

     

    From a look at the code, at least the first couple of items on that list look like relatively easy projects for a C noob.

     

    I'm wondering where I should store the data for this though - looking at the code, it seems that seq_cc_trk (defined in seq_cc.h and .c) is used to store the track data that's accessible via CC - eg: the current humanize values are stored there...  I don't want to interfere with the already defined CCs, and there aren't too many free slots apparently.  Since this is just a screwaround project I don't need my variables to be accessible by CC.  Is there another struct where track variables not accessible by CC are stored that I should use?
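
    (If there isn't one, I imagine a small parallel per-track struct alongside seq_cc_trk would do the job - purely a sketch with made-up names, assuming 16 tracks:)

    #include <stdint.h>

    #define NUM_TRACKS 16  // assumption for this sketch

    // Hypothetical per-track storage for experimental values that don't need to be CC-accessible.
    typedef struct {
      uint8_t  humanize2_note_intensity;
      uint8_t  humanize2_vel_intensity;
      uint8_t  humanize2_len_intensity;
      uint16_t humanize2_step_mask;
    } humanize2_trk_t;

    static humanize2_trk_t humanize2_trk[NUM_TRACKS];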

    STM-project.hex

    LPC-project.hex

  11. I am having the same minor problem described in this post:

     

    ...my encoders, when turned slowly, sometimes jump around...  If I'm turning them slowly in one direction, numbers will increment several times, then decrement by one, then increment again.  Not a huge problem, but it'd be great to fix it if I can.  Is there any documentation anywhere on what the encoder types described in the MBSEQ_HW.V4 file do?

     

    ##################################################
    # Encoder Functions
    # SR = 0: encoder disabled
    # SR = 1..16: DIN assignment
    # Types: NON_DETENTED, DETENTED1, DETENTED2, DETENTED3
    ##################################################

     

     

    I'm using the detented encoders SmashTV was selling in his shop until recently - they're not listed in the shop right now, I guess he's out of stock.  They're currently declared as DETENTED3 (that would have come from one of the HW file templates, I didn't set it myself).  Does anyone know what encoder type setting I should be using?

     

    ...just tried DETENTED1 and DETENTED2, both are worse than DETENTED3.

     

    NON_DETENTED doesn't jump backward, but the increments obviously don't match the detents, so it's not usable...

     

     

    In the forum thread linked above, TK mentions DETENTED4 and DETENTED5 - when I try those, I get the following error when uploading the file in MIOS Studio: "[269409.830] [sEQ_FILE_HW] ERROR in ENC_DATAWHEEL definition: invalid type 'DETENTED4'!"

     

    There seems to be some discussion here, but it's in German...

  12. Haha...  I didn't mean the list of stuff in the "Settings View" description to be feature requests to implement now - I just meant them as examples of how that view could be expanded in the future.  For now, just two columns (MUTE and SOLO) could be set up, but that implementation might be better than the others because it leaves a lot of room for more features to be added later.

     

    I do all my muting and soloing on the frontpanel right now anyway, so implementing it on the BLM isn't particularly important to me.  I was just putting a few ideas out there because I think this could probably be implemented without using up two right-side control buttons just for the mute and solo functions...

  13. Solos should also be additive, i.e. more than one track can be soloed simultaneously. I haven't tried soloing tracks on the MBSEQ V4 itself yet, so I don't know if that's the default behaviour.

     

    Soloing is additive.

     

    When I use the BLM, my SEQ is always close by, and I use the SEQ's frontpanel for some things (like muting) and the BLM for others.  If you're not currently using both together, you should try it - there's a lot you might not realize the BLM can do...  For example, try switching parameter layers on the SEQ frontpanel while you're in grid view on the BLM...  Or switch trigger layers on the SEQ while you're in Track View on the BLM (you can also switch trigger layers directly from the BLM in track view with ALT+track buttons).

     

    The BLM also makes using the SEQ frontpanel easier in certain ways - eg: using the top buttons to switch step views is a lot easier and more intuitive than using the STEP button on the SEQ frontpanel.  Switching the displayed steps on the BLM will also switch the SEQ display.

     

    The SEQ frontpanel (especially now, after the most recent firmware upgrade, which adds new sync-mute-to-measure functionality) offers a lot more muting and soloing functionality than the BLM does.

     

    Not everything needs to be on the BLM (I'm thinking transport controls probably don't, for example)...  The frontpanel is a great interface once you get used to it, and using the frontpanel and the BLM together is very feature-rich.
