Speakjet - A PIC ready sound chip?



Hi a|x,

Thank you for asking. So far:

v0.1 compiled and loaded (.hex and .syx) into the PIC 18F and 16... this morning. It kinda works, but no sound. I can get the character values on the LCD to follow MIOS Studio, and indeed Max patches, for basic MIDI note numbers; the LCD tracks the note names, and the harmonizer listen function on CC 37 changes between * and -. I haven't tried any other CC values yet. So I'm reasonably confident about the Core PCB soldering and the code compiling. I'll probably build a Max interface to replicate the MIDI scheme for more testing.
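
For anyone who wants to replicate the LCD test: a rough sketch of what a note-name echo looks like under MIOS8. This is my own illustration using the standard MIOS8 C-skeleton receive hook, not the actual KII handler, which does much more:

#include "cmios.h"

// hedged sketch: echo incoming note-ons to the LCD via the standard
// MIOS8 receive hook; the real KII application is far more involved
static const char noteNames[12][3] = {
  "C ", "C#", "D ", "D#", "E ", "F ", "F#", "G ", "G#", "A ", "A#", "B "
};

// called by MIOS whenever a complete MIDI event has been received
void MPROC_NotifyReceivedEvnt(unsigned char evnt0, unsigned char evnt1, unsigned char evnt2) __wparam
{
  if((evnt0 & 0xf0) == 0x90 && evnt2) {      // note-on with velocity > 0
    MIOS_LCD_CursorSet(0x00);                // line 1, column 1
    MIOS_LCD_PrintCString(noteNames[evnt1 % 12]);
    MIOS_LCD_PrintBCD3(evnt1 / 12);          // octave number
  }
}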

The Speakjet board works fine under Phraselator, as far as I can tell. I haven't tried anything too sophisticated, like sentences, but have put in new words and so forth, with good results. The board is powered over the IIC connection, and I have checked the IIC wiring pin by pin. It always says 'ready' when powered up, occasionally with an extra bonus sound.

So, my plan is to get sound out with v0.1, then figure out how to get v0.2 to compile.

Regards, Chris.


Cool, Chris. Sounds like you're getting there. I'm glad I didn't go for the original setup; it took me long enough to put together the massively simpler LPC17-based circuit Thorsten suggested, so I don't think I'd ever have managed the much more complicated circuit detailed on uCapps.de, despite audiocommander's excellent documentation. My matrix-board skills are pretty minimal, unfortunately.

a|x


Don't know how, don't know why, but v0.2 now compiles. Perhaps because I closed Notepad++? Anyway, it all works and I am very pleased. :w00t:

Modifying the code to use a 2x16 LCD and reading the AINs for the sensor matrix were both trivially simple, thanks to clear, readable code and good commenting.
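
For reference, the AIN side of that change is tiny. A minimal sketch, assuming the standard MIOS8 API calls; the deadband value here is illustrative, not necessarily the KII default:

#include "cmios.h"

// hedged sketch: 4 sensors wired directly (unmultiplexed) to J5
void Init(void) __wparam
{
  MIOS_AIN_UnMuxed();        // no external AIN multiplexer module
  MIOS_AIN_NumberSet(4);     // scan AIN0..AIN3 only
  MIOS_AIN_DeadbandSet(7);   // suppress jitter in the conversion results
}

// called by MIOS whenever one of the scanned AIN pins changes
void AIN_NotifyChange(unsigned char pin, unsigned int pin_value) __wparam
{
  // pin is 0..3, pin_value the 10-bit conversion result (0..1023);
  // hand the reading over to the sensor matrix / tracker code here
}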

It takes me back to Spitalfields four years ago. All credit to Audiocommander's excellent design, coding and documentation. Not to mention the midibox infrastructure and the prodigious amount of work that's gone into the whole body of knowledge.

Naturally, there's still some mopping up to do:

  • Sort out the configuration of the sensors to get the tongue movement associated with the correct sensor positions
  • Make nice frames and cases to match the others
  • Investigate additional options for routing the controller to send CC or NRPN messages out to a synth (see the sketch after this list)
  • Investigate the MIDI syncing and note following
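
On the CC/NRPN item, a hedged sketch of what an NRPN send could look like through the MIOS8 transmit buffer. The NRPN number 0x0105 is made up purely for illustration; CCs 99/98/6 are the standard MIDI NRPN sequence:

#include "cmios.h"

// hedged sketch: send a 7-bit tracker value as NRPN 0x0105 on channel 1
void SendTrackerNRPN(unsigned char value)
{
  MIOS_MIDI_BeginStream();       // keep the merger from interleaving bytes
  MIOS_MIDI_TxBufferPut(0xb0);   // CC, channel 1
  MIOS_MIDI_TxBufferPut(99);     // NRPN number MSB
  MIOS_MIDI_TxBufferPut(0x01);
  MIOS_MIDI_TxBufferPut(0xb0);
  MIOS_MIDI_TxBufferPut(98);     // NRPN number LSB
  MIOS_MIDI_TxBufferPut(0x05);
  MIOS_MIDI_TxBufferPut(0xb0);
  MIOS_MIDI_TxBufferPut(6);      // data entry MSB carries the value
  MIOS_MIDI_TxBufferPut(value & 0x7f);
  MIOS_MIDI_EndStream();
}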

Time for a celebratory beer.

And thank you very much to Audiocommander for sharing your great project.


Thank you!

I enabled MIDI merge by changing disable to enable in main.c, which seems to work: it forwards MIDI input to MIDI OUT, but not data from the sensor matrix. You can see sensor matrix changes happening in MIOS Studio's MIDI input pane, but they're not transmitted to the output pane; MIOS Studio keyboard values, however, are.
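
For anyone searching later, the change boils down to one MIOS8 call at init time. A sketch, assuming the standard MIOS8 API; the exact spot in the KII main.c may differ:

#include "cmios.h"

// hedged sketch: enable the MIOS8 MIDI merger at startup
void Init(void) __wparam
{
  // forwards the incoming MIDI stream to MIDI OUT; note this merges
  // the *input* only -- locally generated sensor data is sent by the
  // application itself and isn't affected by this setting
  MIOS_MIDI_MergerSet(MIOS_MIDI_MERGER_ENABLED);
}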

Regards, Chris.

Heya, nice to hear it's working :thumbsup:

IIRC there's a MIDI Merger setting somewhere that activates MIDI forwarding to the output.

Best,

ac


Hi Audiocommander,

I'd appreciate some advice on configuring the sensor matrix.

Question 1

When I built the sensor matrix, the order in which I connected the sensors was arbitrary, because I was using the sensors to drive a Max patch. Now that I'm using it to drive the KII application, the sensors need to map to the AINs in the correct order. I am using AINs 0-3 (J5A on SmashTV's R4 board), which certainly gets results, so I assume that's the correct set of inputs.

One of your earlier posts shows the following diagram.


    BEAM1         BEAM2
      \    +   +     /
       \  /     \  /
        \/       \/
        /\       /\
       /  \     /  \
      /    +   +    \
    BEAM3         BEAM4

Do the beam numbers map beam 1->AIN0, beam 2->AIN1 ... beam 4->AIN3 ?

I have looked through the code, but it is not clear to me.

Question 2

What's your intention with how the beams work? Should the top two beams converge above, below, or at the same level as the lower sensors?

Regards, Chris.


Hi Chris,

Question 1

Do the beam numbers map beam 1->AIN0, beam 2->AIN1 ... beam 4->AIN3 ?

I have looked through the code, but it is not clear to me.

Question 2

What's your intention with how the beams work? Should the top two beams converge above, below, or at the same level as the lower sensors?

The beams are not supposed to exactly face each other; that's why I pointed them in different directions (otherwise the bottom distance sensor might receive the beam from the top sender).

For all other questions, I suppose you might have overlooked ACHandTracker.h?


/*
 *  ACHandTracker.h

(...)

 *  This sensor matrix fetches the readings of 4 Sharp GP2D120X distance sensors
 *  to calculate additional vars. The matrix is set up by 2 top and 2 bottom sensors.
 *  If you put your hand in this space to simulate a hand-talking puppet, the following
 *  infos are accessible:
 *  TL          top left     sensor 0 value                                     0..20 cm
 *  TR          top right    sensor 1 value                                     0..20 cm
 *  BL          bottom left  sensor 2 value                                     0..20 cm
 *  BR          bottom right sensor 3 value                                     0..20 cm
 *  FINGER      position of fingers (height top)                                0..20 cm
 *  THUMB       position of thumb (height bottom)                               0..20 cm
 *  OPEN        opening distance, space between fingers & thumb                 0..14 cm
 *  HEIGHT      height of whole hand                                            15..95 Hz
 *  ROLL        angle of hand, roll left (0..1) or right (3..4), default 2      0..5  (2)
 *  //BEND      not yet available --- (0..15, default 5)                        0..15 (5)
 *
 *  If a hand-opening is detected, phonemes are triggered based on the phonetic topic
 *  tables of the IIC_SpeakJet classes.
 *
 *  To make it less abstract: there is a Cocoa (Objective-C) demonstration program for
 *  Mac OS X (maybe it works with GNUstep on Linux too, who knows ;)). This program
 *  emulates the sensors and calculates the virtual open beam. All number-crunching is
 *  visualized there. Frankly, I think the gesture implementation so far is quite easy
 *  to grasp if you see what's going on in this demo.
 */

(...)

// do not change this:
#define AIN_MUXED                   1       // 32 AINs via AIN-module
#define AIN_UNMUXED                 0       // 8 AINs via J5, default setting
// ...instead select here the MUX-Type:
#define AIN_MUX                     AIN_UNMUXED

#define AIN_NUM                     4       // number of Analog Inputs, default: 4
#define SENSOR_NUM                  AIN_NUM

(...)

// sensor matrix hardware
#define HT_TL                       0       // AIN pin num of top left sensor
#define HT_TR                       1       // AIN pin num of top right sensor
#define HT_BL                       2       // AIN pin num of bottom left sensor
#define HT_BR                       3       // AIN pin num of bottom right sensor

// sensor config                            // use 16bit division math; best linearized results
#define USE_COMPLEX_LINEARIZING     1       // if disabled, uses lookup table (not recommended, only GP2D120X supported!)
#define GP2D120X                    0x30    //  4..30cm Sharp Distance Sensor
#define GP2D12J0                    0x80    // 10..80cm Sharp Distance Sensor
#define HT_SENSORTYPE               GP2D120X    // which sensor type is used
// note that a sensorType other than GP2D120X has never been tested;
// it's probably better to leave GP2D120X even when working with a GP2D12J0,
// because all calculations are optimized for 0..26 cm; as the voltage is relative, this should be no problem...

(...)

#define SENSOR_SLOWDOWN             4       // every nth signal will be processed (per sensor), only if not quantized

// gesture detection
#define GESTURE_SLOWDOWN            4       // every nth update will be processed (irrelevant if SYNC_ENABLED!)
#define TRACKER_THRESHOLD           10      // every nth update will be requested (irrelevant if SYNC_ENABLED!)
#define SMEAR_VOWEL                 126     // used for enunciate, see SJ_MidiDefines for SJCH_xx typedefs

// midi related
#define SENSORIZED_NOTE_VELOCITY    0       // note velocity for sensorized NOTE_ONs; 0 for lastPhoneme (default: 100)
#define HANDTRACKER_CC_OUTPUT       1       // if tracked values should be sent by midi
#define HANDTRACKER_CC_OUTPUT_VERBOSE 0     // if additional values like AIN values should be sent by midi/CC

(...)
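
In other words, the HT_xx defines are the AIN mapping: if your sensors ended up wired in a different order, just remap the pin numbers instead of rewiring. A hypothetical example (not a recommendation):

// hypothetical wiring: top-left sensor landed on AIN2, bottom-left on AIN0
#define HT_TL                       2       // AIN pin num of top left sensor
#define HT_TR                       1       // AIN pin num of top right sensor
#define HT_BL                       0       // AIN pin num of bottom left sensor
#define HT_BR                       3       // AIN pin num of bottom right sensor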

I guess this should answer all your questions :ahappy:

If you're interested in how I mounted the sensors, you should check out this documentation, esp. the second video with the wooden test-rig and the detailed photos of the sensor mountings in my second machine (clear case):

http://www.audiocommander.de/blog/?p=71

Cheers,

ac


  • 5 months later...

Testing, testing...

my KII works!

I got it working about six months ago, and in the lull of a new year, I thought it would be good to share. Hope all the link stuff works right. Anyway, more work to do, but I'm very satisfied to have got this far. Maybe I should post this on the projects page, but it felt like closure for this thread.

If only I could get the video to embed properly

Regards, Chris.


Many thanks audiocommander! I've done quite a few electronic projects over the years, but this was a real challenge (and it takes so long!).

I'm keen to get it cased up. I was going to make something with lots of trendy ply slices (see attachment), but I'm wondering if I should try for something a bit more Luigi Colani (I loved his stuff when I was a kid, and saw his exhibition in London a few years ago).

Next steps will probably be to check it out with some MIDI running in.


Hi Sparx, I picked up the sock-puppet meme from audiocommander's website: he mentions it as a paradigm for single-handed operation. A white sock gives maximum reflectivity for the IR sensors.

I think we'll work on singing rather than speech first. How's your speech synth coming along?

Regards, Chris.


Singing socks, even better!

I'm working on a regular synth; speech is possible with the chip I'm using, although not much was ever done with it. But yes, I can report progress: yesterday I hooked up the keyboard and played songs using the MIDIO128 V3 matrix; today I will be making a more permanent interface board for it, then it's back to driving the noise generator part.

I'll send you an email with some pics as this is going OT.

Best regards

S


  • 1 month later...

Hi Analog Monster,

Is there a diagram for the LPC17 based circuit?

No, there isn't.

This project was created in 2007, so there's only a PIC-based core module: http://www.ucapps.de/mbhp_core.html

:-)

Though I doubt that anything faster would do any good, as the SpeakJet is quite old and slow as well.

cheers,

Michael


You don't need the complete board, just buy a LPCXPRESSO from Embedded Artists (for 20 EUR + shipping, delivery within two days), and solder it on a veroboard together with the SpeakJet, the 3.3V voltage regulator circuit and a USB socket (optionally also a MagJack and MIDI IN/OUT). See also http://www.ucapps.de/mbhp/mbhp_core_lpc17.pdf for a "full featured" board, but as mentioned: you don't need the whole circuit to get it running. :)

Yes, it will also run on a STM32 core thanks to the MIOS32 hardware abstraction layer :) But using the LPCXPRESSO is the best (and cheapest) solution.

Best Regards, Thorsten.

This is why I am confused: TK seems to have the project running on the LPC17. I have one left over from a university project that I won't be using any more, so I would like to use it for this project. I could go with a PIC-based core and etch the SpeakJet board, but if there is a simpler way that uses my LPC17, that would be great.


It is running with the LPC core. I can only imagine that there are two groups of users: those still hacking away on the PIC core version, and those using the LPC.

I'm interested in this project, but after reading about the instability of the chip, I'm feeling a bit discouraged.

It can't be that bad, as there are commercial MIDI synth units using it (quite pricey, too):

http://createdigitalmusic.com/2007/04/getlofi-flame-the-talking-midi-synth-and-a-speech-chip-for-diy-hardware/

