Speakjet - A PIC ready sound chip?


herr_prof

The adaptation was successful: it's possible to send data to the SpeakJet without problems, and Audiocommander's application is running fine under MIOS32:

http://svnmios.midibox.org/listing.php?repname=svn.mios32&path=%2Ftrunk%2Fapps%2Fsynthesizers%2Fmidibox_sj_v2%2F&rev=0&sc=0

More details about the (really simple!) circuit tomorrow.

Best Regards, Thorsten.



  • 2 weeks later...

Well, I've got TK's LPC17-based MIDIbox SJ 2 up and running, despite serial soldering incompetence... ;)

I'm thinking of attempting to design a PCB for it, as I hate the messiness of the matrix-board solution. I'd also like to get a display connected, and maybe at some point design some kind of UI for it. That's a long-term project though, as it would require learning to write MIOS application code.

I'll post some photos of my board and some audio clips later, for anyone who's interested.

a|x


Another example:

It sounds great!! However, I'm starting to think there may be something wrong with either my board (not unlikely) or the converted LPC17 SpeakJet application. The operation of the SpeakJet seems generally quite unstable.

I've noticed the following (some or all issues may be hardware or software errors, or just me misunderstanding how the application is supposed to work):

Sending notes to MIDI channel 5 produces no sounds

Channel 6:

doesn't respond to the Jaw Open and Tongue Position controllers (CC 40 and 41), and produces the same untuned white-noise-like sound for all keys. Oddly, the Speed parameter does seem to have an effect

Channel 7:

seems OK. I can control the Tongue Position and Jaw Open parameters from my MIDI controller. Am I right in thinking the Jaw Open parameter has only two possible values? I don't seem to be able to change Tongue Position while an allophone is playing, which surprised me. I'm guessing changed values only affect the next triggered note. I guess the SpeakJet chip wasn't really designed for this kind of realtime tweaking.

Channel 8:

same as 6 - untuned noise, responds only to the Speed parameter

Channel 9:

as above

Channel 10:

seems to work OK. Responds to the Stress and Speed parameters, though Stress (CC 43) values from the controller don't seem to be 'held', i.e. when the parameter is adjusted from my controller, it's only the next note that's affected; any subsequent notes that are triggered revert to a default value unless the controller is tweaked again. Maybe this is expected behaviour? This isn't the case with Channel 1 (all sound codes), which works as expected, though it does appear to be the case with Channel 2 (all allophones) and Channel 3 (all SFX), so I'm assuming this is an error.

Channels 11-16:

I can trigger different oscillator sounds. They're very 'clicky'. This is probably expected. I haven't mapped the synth parameters on my controller yet, so maybe it's just the default settings that exhibit this behaviour. Also, the note mapping seems odd: some adjacent notes have the same pitch, and the note-to-pitch mapping generally seems irregular. Is this because of the harmoniser/scaling used for the hand-tracker?

General issues:

Pitchbend is glitchy, where implemented. Bending up sometimes has a delayed response; bending down works as expected for smaller bend values, but at extreme settings it glitches out, giving a wobbly sound which seems to bear little relation to the phoneme being triggered, plus occasional high-pitched pops, sustained noises and circuit-bent-type glitches. Again, bending only seems to have an effect on the next note triggered, rather than the currently-playing sound, which is very counterintuitive. It also seems to lead to a situation where the pitchbend control is at its centre value, but the next note still plays at the wrong pitch.

I get a lot of stuck notes. This seems to happen when the note is released before the phoneme triggered has finished playing (though I'm not sure it happens in every case).

When initially plugged in, I get 'ready' twice, followed by 'goodbye', which isn't really a problem, but is a bit odd. Again, may be expected behaviour.

I have no way of knowing if any or all of these issues are present in the original PIC-based KII setup, unfortunately, or in Thorsten's LPC17 setup, which I've copied. It would be great if TK and/or audiocommander could confirm on their setups. I don't mean to be super-critical; I'm really impressed with the project. There's a good chance, as I said, that a lot of the above results from me not really understanding how things are supposed to work, and other issues I've identified may simply be the inevitable result of the way the SpeakJet chip was designed. I've also only spent a few hours working with it so far, so I'll probably find workarounds for most of these things anyway.

I'm considering making some kind of an interface for a future version of the SpeakJet application, with a screen and a couple of controls. I think I'd probably also rearrange things so that it just responded on a single MIDI channel, or perhaps 2 (one for allophones/sounds and one for pitch), and had a way of switching modes, rather than switching modes by changing MIDI channels. I've got to learn a bit of MIOS programming before I can do that though. This might be a good project to get my teeth into, especially since audiocommander's code seems to be quite clear and well-commented.

Cheers,

a|x


I can confirm that the SpeakJet sometimes behaves very unstably; this is a big limitation of this chip, which makes it useless for a real synthesizer.

(see also other opinions in this long thread)

That's also the reason why I proposed that it would be better to run the speech synth internally (and get rid of the SpeakJet): the LPC17 microcontroller has more than enough processing power to handle this, and it would open new options.

Best Regards, Thorsten.


indeed, it's just a very tiny (and old!) chip, so one shouldn't expect too much, but I really like your demo!

About the MidiChannels: right, I experimented a bit with this version (but it's nearly four years now, so I don't remember exactly what I changed, but your description sounds like it's all fine).

I guess you've already seen it, but you can find the (outdated v0.1) Midi chart here: http://www.midibox.org/dokuwiki/speakjet_application_software_v_0.1#midi-implementation-chart. You can also take a look in the IIC_SpeakJetMidiDefines.h file and adapt these to your needs:

// ********* SPEAKJET MIDI ASSIGNMENTS * //
// MIDI ASSIGNMENTS
// optimized for KORG microKONTROL
//
// just assign the ControlChange or Channel Numbers to the functions
// eg: CC 107 on CH 1 controls the SPEED of the SpeakJet Allophones (SJCC_SPEED)
// eg: Notes  on CH 2 trigger allophones (all phonemes) (SJCH_ALLOPHONES)
//
// some midi-controls are hardcoded (can't be assigned here)
// channel control messages like PITCH-WHEEL, AFTERTOUCH or PANIC and
// system realtime messages like START/STOP/CONTINUE

// ********** CHANNELS **********
// channel assignments for NOTE_ONs 0x90 and NOTE_OFFs 0x80 functions
// MIDI CHANNELS for NOTE_ONs

#define SJCH_SOUNDCODES			1	//	MSA Soundcodes (Allophones + FX)						default 1
#define SJCH_ALLOPHONES			2	//	MSA Soundcodes (Allophones)								default 2
#define SJCH_FX				3	//	MSA Soundcodes (FX)										default 3
#define SJCH_PITCH			4	//	MSA Pitch (Note2Freq) only! No sound output!			default 4
#define SJCH_VOWELS_DIPHTONGS		5	//	MSA Pitched Vowels incl. Diphtongs						default 5
#define SJCH_CONSONANTS			6	//	MSA Pitched Consonants (depending on jaw & tongue)*	default 6
#define SJCH_VOWELS_CONSONANTS		7	//	MSA Pitched Vowels & Consonants (mixed)					default 7
#define SJCH_CONSONANTS_OPEN		8	//	MSA Pitched Consonants produced by mouth opening		default 8
#define SJCH_CONSONANTS_CLOSE		9	//	MSA Pitched Consonants produced by mouth closing		default 9
#define SJCH_PERCUSSIVE			10	//	MSA Soundcodes (Percussive Consonants)					default 10
#define SJCH_OSC1			11	//	OSC 1 (Note2Freq, monophon)								default 11
#define SJCH_OSC2			12	//	OSC 2 (Note2Freq, monophon)								default 12
#define SJCH_OSC3			13	//	OSC 3 (Note2Freq, monophon)								default 13
#define SJCH_OSC4			14	//	OSC 4 (Note2Freq, monophon)								default 14
#define SJCH_OSC5			15	//	OSC 5 (Note2Freq, monophon)								default 15
#define SJCH_OSC_SYN			16	//	OSCs 1 to 5 (Note2Freq, harmonic polymode)				default 16

And don't forget about the main.h settings, the most important here is the BUILDMODE:
// general defines
#define FALSE				0
#define TRUE				1

#define BUILDMODE_DEFAULT		0
#define BUILDMODE_KTWO_BLACK		1
#define BUILDMODE_KTWO_ACRYL		2
#define BUILDMODE_KTWO_CROSS		3
#define BUILDMODE_KTWO_TESTPCB		4

// select buildmode (predefined settings)
#define BUILDMODE					BUILDMODE_KTWO_CROSS	//BUILDMODE_DEFAULT

so for example, with buildmode "BUILDMODE_DEFAULT" you get:
#if BUILDMODE == BUILDMODE_DEFAULT
	#define KII_AIN_ENABLED				0	// if AIN (sensorMatrix) enabled
	#define KII_LCD_ENABLED				1	// if LCD is attached
	#define KII_SYNC_ENABLED			1	// if SYNC features (MASTER/SLAVE) are enabled
	#define KII_WELCOME_ENABLED			1	// enunciates welcome message on startup
	#define KIII_MODE				0	// for use with Mac OS kIII software only.
	#define MIDI_MERGER_MODE			MIOS_MIDI_MERGER_DISABLED	// if MIDI-IN should be forwarded
	#define KII_LCD_TYPE				28	// currently supported: 216 & 28 <default>
	#define ACSYNC_GARBAGE_SERVICE			0	// if a reset should be done each ACSYNC_GARBAGE_SERVICE_CTR
	#define ACSYNC_GARBAGE_SERVICE_CTR		23	// each x bars a reset is done, default: 23 (1..23)
	#define ACSYNC_GARBAGE_SERVICE_HARDRESET	0	// if a hard- or soft-reset of the SJ is done, default: 1
	#define SCALE_DEFAULT				20	// 0 for not harmonized, default: 20 (spanish)
	#define BASE_DEFAULT				MIDI_NOTE_H	// MIDI_NOTE_x, default: MIDI_NOTE_H
#endif /* BUILDMODE_DEFAULT */

of course, you can change these settings as well, or add a new buildmode depending on the needs of your device (that was the idea behind this mode, so that it can be easily built for one device by changing one #define only).

...and I have to admit, I haven't looked at the port, so files and settings might be somewhere else...

Cheers,

ac


Thanks for getting back to me, TK.

I can confirm that the SpeakJet sometimes behaves very unstably; this is a big limitation of this chip, which makes it useless for a real synthesizer.

(see also other opinions in this long thread)

I haven't read through all 16 pages of the thread, I must admit. It was the demos on the project page that inspired me most, I think, and I've always had a fascination with speech synthesis, so I'm sure I'd still have wanted to build it anyway. I don't tend to mix down or perform from synths running live so much these days, so the instability isn't such a massive issue. I'll record a load of stuff in Live and chop it up into useful segments. I was just interested to know if anyone else has noticed similar behaviour.

That's also the reason why I proposed that it would be better to run the speech synth internally (and get rid of the SpeakJet): the LPC17 microcontroller has more than enough processing power to handle this, and it would open new options.

Your comment does make a lot more sense now I've heard the chip in action. I'm really intrigued by the software-based approach, too, which would definitely open up more possibilities, and potentially much better audio quality with a decent DAC. I imagine there's probably example code out there for speech-synthesis applications like this.

a|x


Hi ac, thanks very much for your rapid reply!

indeed, it's just a very tiny (and old!) chip, so one shouldn't expect too much

:) Yes, I think I was probably being a bit unrealistic in my expectations. Funnily enough, I've just ordered a slightly more recent chip, the Soundgin, which is supposed to do some of the same things as the SpeakJet, but may be a bit more stable. Not quite sure how I'm going to bring that under MIDIbox control, but there's a guy who's released an Arduino project to control it, which I might be able to borrow some code from.

but I really like your demo!

Thanks! It was a very simple setup.

About the MidiChannels: right, I experimented a bit with this version (but it's nearly four years now, so I don't remember exactly what I changed, but your description sounds like it's all fine).

Ah, OK. I thought that might be the case. I thought maybe it was set up for a very specific use-case, which might explain why I was getting some odd results.

I guess you've already seen it, but you can find the (outdated v0.1) Midi chart here: http://www.midibox.org/dokuwiki/speakjet_application_software_v_0.1#midi-implementation-chart.

Yep, found that, thanks. I started to produce a template for my Novation XStation, but only got as far as mapping the Jaw Open, Tongue Position, Speed and Master Volume parameters.

You can also take a look in the IIC_SpeakJetMidiDefines.h file and adapt these to your needs: [channel and CC assignments quoted above]

And don't forget about the main.h settings, the most important here is the BUILDMODE: [main.h defines quoted above]

of course, you can change these settings as well, or add a new buildmode depending on the needs of your device (that was the idea behind this mode, so that it can be easily built for one device by changing one #define only).

...and I have to admit, I haven't looked at the port, so files and settings might be somewhere else...

Thanks very much for the info, ac, that's really helpful.

Yeah, I don't know what settings TK used either. I can have a look though, I think he sent me the updated MIOS32 project. I'll have a go at tweaking the various vars and see what happens. The Scale default variable there being set to 20 might have something to do with the pitch weirdness I was experiencing, perhaps?

a|x


Btw.: there is a hidden feature in the MIOS32 implementation:

inside the MIOS terminal, type "!" followed by any characters; they will be sent directly to the SpeakJet.

This could be helpful for finding nice phrases before they are hardcoded into the application (the function is located in terminal.c).

See the SpeakJet datasheet for available codes.
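In case it helps to picture it, the passthrough boils down to something like this little sketch (simplified, not the exact code from terminal.c, and SPEAKJET_SendChar() is just a placeholder name for whatever transmit routine the port actually uses):

#include <stdio.h>

/* Placeholder for the real SpeakJet transmit routine of the port
   (name made up - check terminal.c / the IIC_SpeakJet driver for the real one). */
static void SPEAKJET_SendChar(char c)
{
  printf("-> SpeakJet byte: %d\n", (int)(unsigned char)c);  /* stub: just log the byte */
}

/* If a terminal line starts with '!', forward everything after it to the chip. */
static void TERMINAL_HandleLine(const char *line)
{
  if( line[0] != '!' )
    return;
  for(const char *c = line + 1; *c != 0; ++c)
    SPEAKJET_SendChar(*c);
}

int main(void)
{
  TERMINAL_HandleLine("!Hello");  /* sends 'H','e','l','l','o' one by one */
  return 0;
}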

I remember that there was a dictionary for SpeakJet codes somewhere on the web; you could integrate it into the app as well.

Best Regards, Thorsten.


Incidentally, ac,

could you possibly confirm that my observations about pitchbend and other parameter tweaks only applying to the next triggered note represent the expected behaviour?

a|x

IIRC yes, that is because of the way the SpeakJet works - you can send some control messages to it and then it'll react. I don't think it was possible to play a vowel and then change the pitch of that playing vowel, in contrast to changing an OSC that has been switched on, where there are many more control possibilities. Allophones are kind of pre-calculated OSC settings with specific curves, so (again IIRC) you can't alter them that much in realtime. However, you can alter almost anything if you know it in advance. But I guess the SJ was produced for enunciation with a dictionary (there's a companion chip with an English dictionary, btw), thus all parameters would be known in advance. And besides, there's just this 64-byte input buffer, which is really (really) small and may overflow quite easily, especially if you send loads of controller values coming from a mechanical slider; just one slide and you'll produce lots of bytes in a very short time. I tried some tricks to work around that on the application side (but I'm no pro either and just an autodidact, so....)
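Just to illustrate the buffer point (this is only a rough sketch of the idea, not code from the app): before forwarding a controller value to the SpeakJet you can thin it out, e.g. only send it when it has really changed and a minimum time has passed, so a single slider sweep can't flood the 64-byte buffer:

#include <stdbool.h>
#include <stdint.h>

/* Rough CC-thinning sketch (not the actual app code): forward a controller
   value to the SpeakJet only if it changed and at most once per 10 ms. */
#define CC_MIN_INTERVAL_MS 10

static uint8_t  last_value   = 0xFF;   /* 0xFF = nothing sent yet */
static uint32_t last_time_ms = 0;

bool cc_should_forward(uint8_t value, uint32_t now_ms)
{
    if (value == last_value)
        return false;                           /* no change, skip */
    if (now_ms - last_time_ms < CC_MIN_INTERVAL_MS)
        return false;                           /* too soon, skip */
    last_value   = value;
    last_time_ms = now_ms;
    return true;                                /* ok, send this one */
}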

It would probably be good to take a look in the datasheet for a refresher on the theoretical background of the chip: http://www.magnevation.com/pdfs/speakjetusermanual.pdf

To check out the "raw" capabilities of the SpeakJet chip and store phrases in the EEPROM, I found the Phrase-a-lator software best. Unfortunately it only works from Windows and you need the chip hooked up via RS232 (but there's also a user manual that helps to see the possibilities) http://www.magnevation.com/software.htm

I think it may be challenging to develop custom speech/allophone software. There are some resources to check, like:

- Flite: http://www.speech.cs.cmu.edu/flite/ (small C speech engine)

- Language Construction Kit: http://www.zompist.com/kit.html (theoretical background on phonological issues when creating a language)

- Festival (Free Speech Generation) & FestVox

- Cairo (Java) http://www.speechforge.org/projects/cairo/

...but I found that high-quality sound generators are all closed source, and good algorithms are valuable and thus hard to find (if not unavailable).

---

About the harmonizer: IIRC you should be able to change it via MIDI (so no need to recompile). It should use these new values (which can be looked up in "ACMidiProtocol.h"):

#define MIDI_CC_HARMONY_BASE					80	// 0x50, 0-11, see MIDI_NOTE_x
#define MIDI_CC_HARMONY_SCALE					81	// 0x51, 0:none, 1-127:scaleNum

so you could try sending CC81 with a value of 0, then there's no scale (Global Channel should be 0)

(if it doesn't work you can test CC38 from v0.1, but I'm pretty sure it's now 81, as I reworked that to play along well with all my harmonizer-based projects).
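In raw MIDI bytes that's a single three-byte Control Change message (assuming the Global Channel is 0, i.e. MIDI channel 1):

/* CC 81 (0x51) = MIDI_CC_HARMONY_SCALE, value 0 = no scale,
   sent on MIDI channel 1 (status byte 0xB0) */
const unsigned char cc_harmony_scale_off[3] = { 0xB0, 0x51, 0x00 };

/* the base note works the same way via CC 80 (0x50) with a value of 0-11, see MIDI_NOTE_x */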

Best,

Michael


Thanks very much for all the info, Michael, that's all really useful stuff.

I think I'd still like to attempt a cut-down version of your SpeakJet application, if you have no objection. I've never tried MIOS programming, or any major projects in C, so who knows how I'll get on. I think for the moment I'd like to focus on hardware control of the SpeakJet, and I also have a Soundgin chip on the way from the States, so I'd also like to MIDIbox that, at some point.

Currently, I'm thinking of something along these lines for the SpeakJet, building on your work, Michael, but taking things in a simplified direction:

Phoneme and maybe SFX reproduction only initially (equivalent to Channels 1 or 2 of the existing app)

I'm thinking there are two possible solutions to playing sounds melodically:

1. The current setup, with pitch and timbre on discrete MIDI channels. Note velocity mapped to 'Stress' parameter

2. MIDI note sets pitch, note velocity sets which phoneme/sound code is played, and unpitched allophones play at a fixed pitch across all keys (roughly sketched below). This approach comes from a Kontakt bank I created a while back from a set of phoneme samples from a vintage SP0256 speech IC. Stress mapped to a MIDI CC.

In either case, Speed parameter mapped to MIDI controller, maybe modwheel.
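Here's roughly what I have in mind for option 2, just to get my head around it. It's only a sketch: the numbers are arbitrary placeholders, and the real sound codes would come from the SpeakJet datasheet:

#include <stdint.h>

/* Sketch of option 2: the MIDI note sets the pitch, the velocity picks the phoneme.
   PHONEME_CODES is a placeholder table; the real allophone codes come from the datasheet. */
static const uint8_t PHONEME_CODES[] = { 128, 129, 130, 131, 132, 133, 134, 135 };
#define NUM_PHONEMES (sizeof(PHONEME_CODES) / sizeof(PHONEME_CODES[0]))

typedef struct {
    uint8_t phoneme;   /* SpeakJet sound code to trigger      */
    uint8_t pitch;     /* MIDI note number used for Note2Freq */
} sj_trigger_t;

sj_trigger_t map_note(uint8_t note, uint8_t velocity)
{
    sj_trigger_t t;
    /* divide the 0-127 velocity range into NUM_PHONEMES equal buckets */
    t.phoneme = PHONEME_CODES[(velocity * NUM_PHONEMES) / 128];
    t.pitch   = note;
    return t;
}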

The software-only solution is intriguing, too, but you're right, Michael, I suspect it's going to be hard to find any example code to serve as a starting point. A lot of research that I've seen seems very much focused on realistic, not-necessarily-realtime simulation of spoken speech patterns, and rules for text-to-speech reproduction. I'm not really interested in that; I just want to be able to produce phonemes under full MIDI control. It seems like a much bigger task than controlling a chip with phoneme synthesis already built in, but I take your point, Thorsten; a completely software solution would allow more flexibility and potentially much better audio quality. It's something to bear in mind, and maybe I'll get to it one day.

a|x


  • 5 weeks later...

Hi Michael,

I apologise for such a long delay in replying to your post in February; life is complicated sometimes.

First of all, thank you very much for your helpful reply and continued support in this thread.

I am on the threshold of connecting it all together and testing software and so forth, and hope to have it working 'soon' and to fascinate a speech and language therapist with it :)

With regard to the Sharp IR sensors, I have done a little more work with the Arduino for a friend's installation and found the following link useful:

http://www.electrotap.com/blog/503

although the thing that made a BIG difference to the noise levels was putting 10K pull-down resistors on the analogue inputs of the Arduino, so I may try that on the MB core's analogue inputs.

Regards Chris.


Hi Chrisbob,

there are a couple of threads on this forum where we discussed the implementation of the Sharp IR sensors in great detail. As you already found out, they don't deliver the full 0-5V range, so scaling is required. Personally, I found software scaling much easier than hardware scaling (which requires an amplification circuit and a higher voltage) and thus implemented a software scaling that works well. Moreover, the scaling can be calibrated, which is a lot nicer than hardware scaling.
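The principle is just a calibrated linear rescale, roughly like this (not the actual code from the application, only the idea):

/* Rough idea of the software scaling (not the code from the app):
   map the raw 10-bit AIN reading, which the Sharp sensor only drives over part
   of 0..1023, back to the full 0..127 MIDI range. ain_min/ain_max are the
   calibration values measured for the individual sensor. */
unsigned char scale_ain(unsigned int raw, unsigned int ain_min, unsigned int ain_max)
{
    if (raw <= ain_min) return 0;
    if (raw >= ain_max) return 127;
    return (unsigned char)(((raw - ain_min) * 127UL) / (ain_max - ain_min));
}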

Please use the search function to find the relevant threads (good keywords are: IR, distance, sensor, sharp, d-beam, ...)

Best,

Michael


Hi a|x

Maybe better late than never. I have a spare MAX EEPE chip if you're still interested.

Regards, Chris.

Hi,

just bought a SpeakJet chip, and am looking forward to MIDIBoxing it!

This is a big ask, but could anyone possibly sell me one of the PIC16F88 chips with the SpeakJet firmware already burned onto it? I know there was talk about SmashTV stocking them, but they don't seem to be on his site. Also, I'm having problems tracking down the MAX 232 EEPE. Anyone know of a UK supplier for this item?

a|x


Hi Chris,

thanks for getting back to me. I managed to get the MAX chip from DigiKey in the US in the end, but don't currently need it, as I went with TK's LPC17-based hardware solution, which is much simpler than AC's documented PIC-based setup. Seems to work pretty well, too!

Thanks again for the offer though,

a|x


Darn, I was thinking I could swap it for a beer, in town as well. Glad to hear the LPC17 answers the door - I'd better check it out... maybe my next project.


Thanks Michael, I have started following these up. Your recent comments on the sensor matrix (and finally looking in the readme file for the MIOS app) have been very helpful, particularly in thinking through the gesture envelope for which the hand controller is designed - and alternatives.

I have moved my copy of the instrument on and things are looking good:



  • The SJ board accepts serial control from Phrase-a-lator - as far as I can tell I have loaded the PIC 16F... app correctly into the PIC chip
  • MIOS studio correctly identifies the core MIDI (one of Smash TV's)
  • No funny burning smells
  • The SJ only says "ready" once on power up when the core module is connected via IIC
  • Sensors present varying voltages at the AIN connections (0-3)


Puzzles:

  • No sounds coming out yet when I wave my hand in the sensor matrix
  • The LCD shows...

B -SPAN

O^E^(and a cursor rectangle)

...after the initial MIOS welcome screen and credits. I've shown ^ (caret) signs although on the display they are more asymmetric than that. I suspect that SPAN is the default harmony mode set and B indicates international.

  • The 5V regulator is hot (is that the thirsty sensors?)
  • No response to MIDI notes sent from MIOS Studio's keyboard (although my sound card definitely shows MIDI activity)

I'm pretty sure that the LCD message is not right.

As far as I can tell from the documentation, the SJ does not need to have anything loaded into EEPROM.

Should I be using the .syx file called "release" for the core?

Anyway, early days in a fascinating project - I'm learning a lot. Also very pleased to learn that you named it after a famous automata builder. I have some interest in that area too - my main project in this area is here:

www.boutiquerobots.com

Regards, Chris.


A follow up from the previous message:

I focused on the LCD message. I had loaded the v0.2 application to the PIC16 (with PBrenner) and the MIOS core via MIOS Studio. After studying the v0.1 page, I guessed that the ^ (caret-like) characters were the sawtooth settings, associated with the O and E labels. So I loaded up v0.1 and straight away the LCD display is like the photo on the v0.1 app page, but when I load v0.2 the LCD shows the characters as described in the previous post: B -SPAN etc.

Is that how it's supposed to be?

(Further detail here: the Smash TV PCB is core R4D)

A note on a bug-like thing - the Icon files in the kII applications will not extract: I have to deselect them when extracting with WinZip.

Chris.


Hi Chris,

The LCD shows...

B -SPAN

O^E^(and a cursor rectangle)

...after the initial MIOS welcome screen and credits. I've shown ^ (caret) signs although on the display they are more asymmetric than that. I suspect that SPAN is the default harmony mode set and B indicates international.

...

I'm pretty sure that the LCD message is not right

v0.1 uses a 16x2 LCD and in v0.2 I'm using a 8x2 LCD.

As 8x2 LCDs are quite rare, I suspect you are showing the 8x2 output on a 16x2, which works, but might look strange.

The O means "Oscillator", the E "Envelope",

B is the base note and SPAN is the 4-char code for the current scale (SPANISH)

...but this is all in the docs: http://www.midibox.org/dokuwiki/doku.php?id=speakjet_application_software_v_0.1#lcd-values

Also, you can take a look in the main.c file to see the LCD output function: DISPLAY_Init() and DISPLAY_Tick(). These are really clean to read and you'll see the various Buildmodes (that means you can change the settings for the LCD you have in main.h: KII_LCD_TYPE... but I think I mentioned this a couple of times).

Note that I have two devices and one doesn't use an LCD at all; that's why showing information wasn't a priority for me. But you can change this quite easily.

No sounds coming out yet when I wave my hand in the sensor matrix

...

Should I be using the .syx file called "release" for the core?

I compiled this nearly four years ago, so I really can't remember every setting in the various precompiled syx files for the various device configurations I have plus the default userland settings that are around.

I guess you are referring to the 0.2 "release"; this is most likely built with the BUILDMODE_DEFAULT setting.

If you take a look in main.h, you'll find all the settings:


	#define KII_AIN_ENABLED			0	// if AIN (sensorMatrix) enabled
	#define KII_LCD_ENABLED			1	// if LCD is attached
	#define KII_SYNC_ENABLED		1	// if SYNC features (MASTER/SLAVE) are enabled
	#define KII_WELCOME_ENABLED		1	// enunciates welcome message on startup
	#define KIII_MODE				0	// for use with Mac OS kIII software only.
	#define MIDI_MERGER_MODE		MIOS_MIDI_MERGER_DISABLED	// if MIDI-IN should be forwarded
	#define KII_LCD_TYPE			28	// currently supported: 216 & 28 <default>
	#define ACSYNC_GARBAGE_SERVICE				0	// if a reset should be done each ACSYNC_GARBAGE_SERVICE_CTR
	#define ACSYNC_GARBAGE_SERVICE_CTR			23	// each x bars a reset is done, default: 23 (1..23)
	#define ACSYNC_GARBAGE_SERVICE_HARDRESET	0	// if a hard- or soft-reset of the SJ is done, default: 1
	#define SCALE_DEFAULT			20				// 0 for not harmonized, default: 20 (spanish)
	#define BASE_DEFAULT			MIDI_NOTE_H		// MIDI_NOTE_x, default: MIDI_NOTE_H

And there's also your answer why nothing happens when you wave your hands: KII_AIN_ENABLED is set to 0 by default.

Again, as mentioned: "you can change these settings as well, or add a new buildmode depending on the needs of your device (that was the idea behind this mode, so that it can be easily built for one device by changing one #define only)."
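For example, you could add something like this to main.h and point BUILDMODE at it (BUILDMODE_MYBOX is just a made-up name; the values are copied from BUILDMODE_DEFAULT, with the sensor matrix switched on and a 16x2 LCD selected):

// hypothetical new buildmode for main.h
#define BUILDMODE_MYBOX				5

// select buildmode (predefined settings)
#define BUILDMODE					BUILDMODE_MYBOX

#if BUILDMODE == BUILDMODE_MYBOX
	#define KII_AIN_ENABLED			1	// sensorMatrix on
	#define KII_LCD_ENABLED			1	// LCD attached
	#define KII_SYNC_ENABLED		1
	#define KII_WELCOME_ENABLED		1
	#define KIII_MODE				0
	#define MIDI_MERGER_MODE		MIOS_MIDI_MERGER_DISABLED
	#define KII_LCD_TYPE			216	// 16x2 display instead of the 8x2 default
	#define ACSYNC_GARBAGE_SERVICE				0
	#define ACSYNC_GARBAGE_SERVICE_CTR			23
	#define ACSYNC_GARBAGE_SERVICE_HARDRESET	0
	#define SCALE_DEFAULT			0	// not harmonized
	#define BASE_DEFAULT			MIDI_NOTE_H
#endif /* BUILDMODE_MYBOX */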

No response to MIDI notes sent from MIOS studio's keyboard (although my sound card definitely shows MIDI activity)

I can't say much about this as it's a very vague statement. Possible failures could be usage (right MIDI channel, volume, jaw/tongue setting?), hardware (soldering, connections?), wiring or software (MIDI I/O config, driver).

btw:

it's normal for the regulator to get quite warm. Since you don't have "funny burning smells" :D, I guess it's okay. But if it gets too hot, there might be a severe problem (usually related to wiring).

Regards,

Michael

Link to comment
Share on other sites

Hi Michael,

Thanks for your patience and support. Clearly the learning curve continues: command line compiling is new territory for me.

I have installed the toolchain recommended for MIDIbox, and also note that Perl is required for kII as well - I have installed Strawberry Perl. It all nearly compiles...

The command line dialog threw up a bunch of stuff about the make process before going into an endless repetition of:

error: different addresses for absolute overlay sections ".registers"

So I looked for "registers" in the various files in the application and found that the Perl scripts reassign some of the registers to avoid conflicts. Looking in the .asm files generated by this make, it seems that some parts of ACSynchronizer do not have their registers reassigned, specifically, the following:

FSR1L

POSTDEC1

PREINC1

Maybe I'll need to modify one of the files to do this, though I have not figured out which one. It seems from previous posts that other people have compiled from these files, so maybe I'm barking up the wrong tree.

Regards, Chris.


No worries, I'm sure it's a foolish little error on my part. At the risk of documenting every step of debugging, I'll just post things I checked.

fixasm.pl has lines 29-31 which are explicitly about eliminating this error:


  # to prevent "Error - section '.registers' can not fit the absolute section. Section '.registers' start = 0x00000000, length=0x00000003"

   # for MIOS it is also required to set the start address to 0x010

   s/^\.registers\tudata_ovr\t0x0000/.registers\tudata_ovr\t0x010/;
I'm not sure what to make of that; presumably something's bypassing this. I tried changing references to ACSYNC in main.c to ACSyncronizer to look like the other references to files in the package (yes, it is pure voodoo on my part), which stopped the infinite-loop effect but threw up a lot of error 112 - I suspect that is not the way to go. I remembered to Ctrl-Break, and here are the opening lines on the command line when I hit 'make' with unmodified files:

C:\kII_026>make

gplink -s kII.lkr -m -o kII.hex _output/mios_wrapper.o _output/IIC_SpeakJet.o _output/ACSyncronizer.o _output/ACHandTracker.o _output/main.o _output/ACHarmonizer.o _output/ACMidi.o _output/pic18f452.o _output/ACToolbox.o

error: different addresses for absolute overlay sections ".registers"

...repeat last line ad infinitum

So a bunch of unlinked .o files, though not for mios_wrapper or pic18f452 (which both have four-year-old dates in the output directory). Some of it is starting to make sense. A bit :)

Regards, Chris.

