PYRATONE Sound Engine available for B2C


I am going to make parts of my Pyratone sound system available for DIY projects - possibly as a replacement for the SID :-)

Currently I am designing a system that incorporates both the sound engine and the DSP processors in one FPGA. (My latest system, built around a Cyclone IV and a Spartan and used on stage, spans about 6-8 FPGAs altogether and runs on Altera and Xilinx eval boards, which is not suitable for normal designers or musicians.)

I have now found a module which seems to make it possible to offer these functions at a price reasonable enough to be interesting for DIY as well.

Several parts of the system are already running in professional systems for audio test, ultrasonic and radar applications.

As a first step, there will be a demo board - purchasable or self-built - with audio I/O to test the engine. I will select the detailed functions according to the needs of musicians, so I am starting the discussion now in order to be able to react to musicians' demands.

What is planned:

4 parallel instruments with at least 64 channels, maybe 128 - depending on RAM and FPGA. At the moment it is 64 channels.

The 64 channels incorporate 61 + 3 voices.

61 voices behave like a piano, i.e. 61-fold polyphonic; 3 voices behave like a mono synth.

This leads to 4x4 instruments, where 4 can be pianos, organs or keyboards, plus 12 individual monophonic voices.

Each voice has 4 oscillators and 6 parallel sound paths producing harmonics, FM modulation and more, based on classical and parametric waveforms.

All patches can be routed to 8 channels, giving voices that can be moved spatially.

All voices and parameters have their own ADSRs and LFOs, so keys pressed one after another start individual LFOs and individual sound behaviour.

Oscillators run at sample rates up to 768 kHz, LFOs at up to 16384 Hz.

All parameters have 10-bit instead of 7-bit resolution.
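A 10-bit value does not fit into a single 7-bit MIDI data byte. A minimal sketch of one plausible transport, assuming the engine pairs controller messages MSB/LSB the way MIDI 1.0 does for 14-bit CCs (the CC numbers here are hypothetical):

```python
def encode_10bit_cc(channel, cc_msb, value):
    """Split a 10-bit value (0..1023) into a MIDI CC MSB/LSB pair.

    MIDI 1.0 reserves CCs 0-31 as MSB and CCs 32-63 as the matching LSB,
    so two 7-bit controller messages can carry up to 14 bits; only the
    lower 10 bits are used here.
    """
    assert 0 <= value <= 1023
    status = 0xB0 | (channel & 0x0F)   # Control Change on this channel
    msb = (value >> 7) & 0x7F          # upper 3 of the 10 bits
    lsb = value & 0x7F                 # lower 7 bits
    return [(status, cc_msb, msb), (status, cc_msb + 32, lsb)]

def decode_10bit_cc(msb, lsb):
    """Reassemble the 10-bit value on the receiving side."""
    return ((msb & 0x7F) << 7) | (lsb & 0x7F)
```

The same packing would work over a plain UART, which is one reason a 10-bit parameter path does not strictly require a new wire format.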

Sounds and parameters can be modulated in real time from MIDI and UART (PC).

Altogether there are around 4000 units (LFO + ADSR) running at the same time.

Reverb is added dynamically, creating more sound sources.

Altogether 4 x 6 x (1+2) sound sources are available per sound to drive the 8 channels, so one sound of any key will appear in different ways at different points in time.

Placement follows virtual microphone positioning using ORTF and AB strategies, producing spatial effects.
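As an illustration of the virtual-microphone idea: ORTF places two cardioids at ±55°, so it encodes direction as a level difference, while a spaced AB pair encodes it as an arrival-time difference. A rough sketch under those textbook assumptions (the mapping onto the engine's 8 channels is not specified here):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def ortf_gains(source_azimuth_deg):
    """Level cues for a virtual ORTF pair: two cardioids angled +/-55 deg.
    Cardioid polar response: 0.5 * (1 + cos(theta))."""
    gains = []
    for mic_angle_deg in (-55.0, 55.0):
        theta = math.radians(source_azimuth_deg - mic_angle_deg)
        gains.append(0.5 * (1.0 + math.cos(theta)))
    return gains

def ab_delay_s(source_x_m, spacing_m=0.5, source_dist_m=1.0):
    """Time cue for a spaced AB pair: difference of the two path lengths
    to a source placed source_dist_m in front, source_x_m off-centre."""
    d_left = math.hypot(source_x_m + spacing_m / 2, source_dist_m)
    d_right = math.hypot(source_x_m - spacing_m / 2, source_dist_m)
    return (d_right - d_left) / SPEED_OF_SOUND
```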

The 8-channel sound processor can be layered so that the 4 instruments create 4 different spatial scenes.

MIDI layering makes sure that you can attach 4 keyboards, and split and join them so that it is possible to drive 16 different sound scenes at the same time. So a 96-key piano can be emulated with two real keyboards, just as one keyboard could drive a dual-manual organ with 2x126 keys.
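A toy sketch of such layering: two attached keyboards tiled onto one wide piano range via a per-device transpose table (the instrument name and transpose values are invented for illustration):

```python
# Hypothetical layer table: device id -> target instrument and transpose.
# Two 61-key boards transposed apart tile roughly a 96-key range.
LAYERS = {
    0: {"instrument": "piano", "transpose": -24},  # board 0 covers the bottom
    1: {"instrument": "piano", "transpose": +24},  # board 1 covers the top
}

def route_note(device_id, midi_note):
    """Map an incoming note from one physical keyboard onto the layered
    instrument, the way a MIDI-layering stage might."""
    layer = LAYERS[device_id]
    return layer["instrument"], midi_note + layer["transpose"]
```

Splits work the same way, with the table keyed additionally on a note range instead of only on the device.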

Depending on demand and space in the FPGA, MIDI ARPs, sampling, wavetables and BEATMORPH are also available. See www.96khz.org regarding my projects.

I might also add the graphical oscilloscope, graphical equalizers and master compressors.

Highlight: a real-time audio-to-MIDI converter for guitar and human voice to drive the synth directly.

Planned: support for polyphonic aftertouch and USB to interface with the Roli Seaboard, Touch Keys and similar devices.

More details to come at www.pyratone.de


The system makes use of some routing and programming techniques which are only possible in FPGAs and cannot be realized in DSP systems. Several design principles of detailed FPGA design I even invented myself, such as 3D pipelining and the static roll-off FSM - totally unique, documented nowhere and only available in my FPGA systems.

Technical details, as far as planned / realized:

24-bit precision frequency input setting, with a deviation of less than 0.1 cent
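For a sense of scale: with a DDS-style phase accumulator, the frequency step is sample_rate / 2^width, and the worst-case rounding error in cents is easy to bound. A sketch assuming a 32-bit accumulator (an assumption, not a stated spec) and the 768 kHz oscillator rate quoted above:

```python
import math

ACC_BITS = 32           # assumed phase-accumulator width
SAMPLE_RATE = 768_000   # oscillator sample rate quoted above

def tuning_word(freq_hz):
    """Per-sample phase increment for a DDS oscillator."""
    return round(freq_hz / SAMPLE_RATE * 2**ACC_BITS)

def deviation_cents(freq_hz):
    """Pitch error caused by rounding the frequency to one tuning-word
    step, in cents (1200 cents per octave)."""
    actual_hz = tuning_word(freq_hz) * SAMPLE_RATE / 2**ACC_BITS
    return abs(1200.0 * math.log2(actual_hz / freq_hz))
```

Even at the 8 Hz bottom of the consumer range, the quantization error stays well below 0.1 cent under these assumptions.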

Frequency range 8 Hz to 24 kHz (consumer) or 0.2 Hz to 300 kHz

MIDI timing up to 375 bpm with a precision of 0.03 Hz - self-synchronized, externally synced or manually synced

Recognizes Note On, Note Off, Value, Velocity, Aftertouch, Channel Number ...

4x normal MIDI (31.25 kBaud, DIN5) and 4x fast MIDI 2000 (3 MBit via S/PDIF)

256 MIDI notes recognized; 4 tables for MIDI-note-to-frequency conversion, including "Bach pure", "chromatic-balanced", "piano spread" and "PYRA86" (my 196/185 tuning) - reprogrammable!
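Only the standard equal-tempered mapping is sketched below; each of the named tables would be a different 256-entry lookup. A4 = MIDI note 69 = 440 Hz is the usual convention:

```python
def equal_tempered_freq(midi_note, a4_hz=440.0):
    """Standard 12-TET: A4 (MIDI note 69) = 440 Hz."""
    return a4_hz * 2.0 ** ((midi_note - 69) / 12.0)

# A 256-entry lookup table, as the engine's reprogrammable tables would be:
NOTE_FREQ = [equal_tempered_freq(n) for n in range(256)]
```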

Global tuning, fine tuning, vibrato and the typical keyboard functions

Harmonics, distortion, FM, cutoff and the typical synth functions

Wavetable according to FPGA resources, at least for 2-3 parallel voices, 4 MB sample RAM

Envelopes with ADSR and/or LFOs for all filters and parameters, in parallel per key.
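As a reference for what one of those envelope units computes, here is a minimal piecewise-linear ADSR (the real hardware may well use exponential segments; the times and levels here are arbitrary examples):

```python
def adsr(t, attack=0.01, decay=0.1, sustain=0.7, release=0.3, gate_time=0.5):
    """Envelope level at time t (seconds) for a piecewise-linear ADSR.
    The key is held until gate_time, then released."""
    if t < attack:                                   # attack: ramp 0 -> 1
        return t / attack
    if t < attack + decay:                           # decay: 1 -> sustain
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < gate_time:                                # sustain while key held
        return sustain
    if t < gate_time + release:                      # release: sustain -> 0
        return sustain * (1.0 - (t - gate_time) / release)
    return 0.0
```

Per-key instances of this, each with its own start time, are what make successively pressed keys evolve independently.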

Reverb with 4 MB at 32-bit resolution, 3D placer

8 channel outputs at 24 bit / 96 kHz with simple PDM analog, I2S, DSD256 and TDM96-8; 4 extra channels for a double-bass array


* All parameters are preliminary.


Edited by engineer

I also plan a VST plugin which drives the instruments from within the known platforms like Cubase. As shown above, there will be 16 channels with at least 256 musical voices of polyphony. If this is not enough, another FPGA can be added to double them :-)


There are some more functions which are not yet available in DSPs or keyboards and which I do not want to describe now.

If you have any ideas what might be missing, please add a comment.


Most probably, yes. My sound module should fit MIDIBOX well.

I have noticed quite a number of interesting applications developed here by several users, some of which already seem to beat existing MIDI controllers' functions. As you might have seen on my website, I have also always been developing MIDI controllers myself, in order to fulfill my enhanced requirements, which are speed and resolution. Therefore I developed MIDI over audio and MIDI over S/PDIF to avoid USB and latency problems. I used RS232 at 230 kBaud and S/PDIF to become quick enough. There is a detailed report on my site regarding "the limits of MIDI" and the "enhanced MIDI protocol". See "MIDI 2000".

I also wanted to overcome the poor resolution of 7-bit MIDI for the controllers. My latest project was the 31-element MIDI controller shown, able to deliver 10 bits and more over several turns. It also has precise optical feedback via the LEDs. Unfortunately it is quite expensive and hard to realize for DIY. So I stepped back to visual output via VGA plus mouse control, and I am currently evaluating touch screens for easy parameter input. USB and driver issues are the current obstacle here :-(

In parallel I still need MIDI input from controllers to easily control all that in real time on stage. Regarding MIDIBOX, it appears to be a good solution for this. I expect it to be reprogrammable to drive a 10-bit precise output somehow. (?) A simple UART output at e.g. 115 kBaud should be enough.

In return, my module might be interesting for MIDIBOX users too, in case they would like a superior sound engine.

My engine is de facto a synthesizer plus an attached multi-functional audio DSP at scalable resolution and quality. Currently it is only used in audio test equipment and industrial applications I have created for my customers. The audio synthesis for music was so far only for myself. For certain reasons it is now possible to offer this B2C as well. Details have to be figured out.

The target is an easy platform which accepts MIDI or MIDI-like information and creates 8-channel audio at unbeaten quality for live performances or professional producers who need more than software sound synthesis. At first glance it is just an FPGA module with some pins. But it offers a lot of performance.

As an example:

In the full version of the system I am using in the Cyclone FPGA (2011), there are hundreds of internal RAMs and registers running, accessed at 200 MHz, driving 1024 channels at 192 kHz. The RAM performance compares to >100 TBps of bandwidth when storing variables, which means it would require around 50 PCs with current DDR4 controllers to do the same number of calculation steps for software sound synthesis :-)

The basic oscillation module, for example, is used in a radar/ultrasonic emulator creating virtual wave reflections in real time, and it is around 100 times faster than the prototyping platform (a VPX i7 CPU with 4 cores) which was programmed with MATLAB. This is because of the parallel processing options in FPGAs. The first module I created this way, in a Spartan 3 device, is now > 10 years old and not under any NDA restrictions or other legal constraint, so ... let's make music :-)

Over the years I created many projects and cores which might be useful. The task now is to bring them together in a reasonable shape so they can be easily controlled.


In simple words: it is a kind of intelligent "SID" and can be programmed that way. (A SID was also the first thing I programmed in VHDL, and it was the "synth" I started with in the 80s with my C64.) One simply writes parameters to the register bank via MIDI or UART (or via the internal GUI), as you do with all known sound expanders. Visualization is performed in real time.


The point of the system is that users can demand functions they need but do not have in their synths or MIDI system. One thing I have, for example, is a rhythm- and ARP-controlled reverb and delay. That means: my system is so fast that it is possible to create early reflections from within MIDI data streams just by copying channels. The MIDI mixer I created runs at 25 MHz, meaning it can mix and copy 1024x1024 normal MIDI channels within microseconds :-)
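A sketch of that channel-copying trick: one note event is duplicated onto extra channels with small delays and reduced velocities, which the engine then renders as early reflections. The tap times, gains and channel numbers are made up for illustration:

```python
def early_reflections(note_event, taps=((0.011, 0.7, 3), (0.023, 0.5, 5))):
    """Copy one (time, channel, note, velocity) event onto extra channels,
    slightly delayed and attenuated, so the copies play as reflections."""
    t, channel, note, velocity = note_event
    events = [note_event]
    for delay_s, gain, target_channel in taps:
        events.append((t + delay_s, target_channel, note,
                       max(1, int(velocity * gain))))
    return sorted(events)
```

Done in software this is just event scheduling; the point above is that the FPGA mixer can do it fast enough to treat the copies as acoustic reflections.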

The result is as if you had a very, very quick mixing guy at the FOH who is able to turn the pan pots differently for every sample. This moves different sources in different directions and creates the spatial sound. (In the original system, it creates the reflections of approaching objects in the simulator.) This is only one function which is not available in any current synthesizer AFAIK. Most of them are simply not possible with DSPs, which usually process only 2 channels. I am processing 2x64 channels, giving e.g. 128 stereo positions between any of the 8 routable channels. This creates 8 loudspeaker positions between which any of the sounds can move in real time. Moreover, parts of the sound such as harmonics can do that too. So it is possible to let one sound crawl from the bottom to the top of the hall while parts of the harmonics move from left to right.


My synthesis can be configured to be precise enough to come "close to analog", as I like to say, and with intelligent MIDI the reaction of the synth can be analog-like as well. My MIDI over S/PDIF protocol offers the possibility to send a complete MIDI message with 12-bit precision - 4095 steps instead of 127 - and THIS is fine enough to be called analog. My K2S (key-to-sound time) is just one to two samples, which means that as soon as the MIDI message starts, the sound runs through the device. FPGA latency is about 200 clock cycles for the complete sound synthesis, which is only about 1 µs.
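The 1 µs figure is a simple ratio, assuming the 200 MHz clock mentioned earlier in the thread:

```python
CLOCK_HZ = 200e6        # FPGA clock quoted earlier in the thread
PIPELINE_CYCLES = 200   # stated depth of the sound-synthesis pipeline

latency_s = PIPELINE_CYCLES / CLOCK_HZ       # 200 cycles / 200 MHz = 1 us
fraction_of_96k_sample = latency_s * 96_000  # versus one 96 kHz sample period
```

That puts the synthesis latency at well under a tenth of a single output sample at 96 kHz, so the one-to-two-sample K2S time is dominated by the MIDI message itself, not the engine.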



