
toneburst

Everything posted by toneburst

  1. OK, seems to be fixed now. I remelted some of the solder on IC51 and IC53, and all now seems well. Must just have been a bad joint on one of the two surface-mount chips. a|x
  2. Lol, thanks for the info, Imp. I do have other things to do, in fact- need to try and work out why audio outs 1 and 2 are horribly distorted :sad: a|x
  3. Hiya, wow, rapid reply. I'll have a look into that. a|x
  4. Well, I thought my SammichFM was working perfectly, but I think I spoke a bit too soon. I seem to be getting a nasty distorted output from Audio Outs 1/2. The output of Outs 2/3 sounds fine. Audio from 1/2 seems to be a distorted version of what's coming out of 2/3. Any tips on tracking down the problem? a|x
  5. Hi, where can I find the perl script to generate random patches that's mentioned on the uCapps.de MIDIboxFM page? I've just completed my SammichFM, and badly need to overwrite the underwhelming General MIDI presets with something more interesting. a|x
  6. toneburst

    sammichFM

    Just completed (bar painting the panel labels) my SammichFM. Nice work Wilba and Nils! a|x
  7. Looking forward to hearing some more audio examples! a|x
  8. Cool, Chris. Sounds like you're getting there. I'm glad I didn't go for the original setup- it took me long enough to put together the massively simpler LPC17-based circuit Thorsten suggested, so I don't think I'd ever have managed the much more complicated circuit detailed on uCapps.de, despite audiocommander's excellent documentation. My matrixboard skills are pretty minimal, unfortunately. a|x
  9. I've passed your suggested venues on to Mark.

    Re carnations- not sure how that would look pinned to the front of a t-shirt ;)

    I'm reasonably easy to spot. Tallish, glasses, ginger beard, sunburn. The latter is fortunate for ID purposes, but otherwise not so good.

    a|x

  10. Is it all working now, Chris? a|x
  11. Sorry, Chris. Yep, got your message. Mark is free next Friday, too (15th), so let's meet then, if you're up for it.

    I think you have to have made a certain number of comments to be able to send direct messages to other users. Some kind of anti-spam measure, I think. Leaving messages here works for the moment, though.

    Have a good time at the festival!


  12. Fancy joining Mark (sparx) and I for a beer or 3 in Town sometime after 6:00 on Friday (8th)? Not sure of venue at the moment, but it'll probably be somewhere in the Green Park/Bond St. area, as that's where Mark works.

    a|x

  13. Wow, that does sound intriguing! Nice work, TK!! Interested to see where this leads. a|x
  14. Hi Chris, thanks for getting back to me. I managed to get the MAX chip from DigiKey in the US in the end, but don't currently need it, as I went with TKs LPC17-based hardware solution, which is much simpler than ACs documented PIC-based setup. Seems to work pretty well, too! Thanks again for the offer though, a|x
  15. SoundGin dev. board arrived yesterday. Looking forward to giving it a go! Just hoping my newly-bought Belkin USB-Serial adapter plays nicely with Parallels/Windoze XP on my Mac laptop... a|x
  16. toneburst

    sammichFM

    I haz Sammich! Just arrived this morning. Haven't checked contents, but I'm sure it's all here. a|x
  17. This looks interesting: http://simulationcorner.net/index.php?page=sam Pretty oldschool/lo-fi, but intriguing. Also, I have gespeak, a Linux frontend for eSpeak, installed in Ubuntu. Sounds quite nice. eSpeak is written in C, so it may be translatable to LPCXPRESSO/MIOS32, as discussed on this other thread: a|x
  18. Thanks very much for all the info, Michael; that's all really useful stuff. I think I'd still like to attempt a cut-down version of your SpeakJet application, if you have no objection. I've never tried MIOS programming, or any major projects in C, so who knows how I'll get on. I think for the moment I'd like to focus on hardware control of the SpeakJet, and I also have a Soundgin chip on the way from the States, so I'd like to MIDIbox that at some point, too.

    Currently, I'm thinking of something along these lines for the SpeakJet, building on your work, Michael, but taking things in a simplified direction: phoneme and maybe SFX reproduction only, initially (equivalent to Channels 1 or 2 of the existing app). I'm thinking there are two possible approaches to playing sounds melodically:

    1. The current setup, with pitch and timbre on discrete MIDI channels, and note velocity mapped to the 'Stress' parameter.
    2. MIDI note sets pitch, note velocity sets which phoneme/sound code is played, and unpitched allophones play at a fixed pitch across all keys. This approach comes from a Kontakt bank I created a while back from a set of phoneme samples from a vintage SP0256 speech IC. Stress would be mapped to a MIDI CC.

    In either case, the Speed parameter would be mapped to a MIDI controller, maybe the modwheel.

    The software-only solution is intriguing, too, but you're right, Michael, I suspect it's going to be hard to find any example code to serve as a starting point. A lot of the research I've seen seems very much focussed on realistic, not-necessarily-realtime simulation of spoken speech patterns, and rules for text-to-speech reproduction. I'm not really interested in that; I just want to be able to produce phonemes under full MIDI control. It seems like a much bigger task than controlling a chip with phoneme synthesis already built in, but I take your point, Thorsten; a completely software solution would allow more flexibility and potentially much better audio quality. It's something to bear in mind, though, and maybe I'll get to it one day. a|x
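The second approach in the post above could be sketched roughly as follows. This is a minimal sketch in C (since MIOS32 apps are written in C), not code from the real application; `NUM_ALLOPHONES` and `ALLOPHONE_BASE` are assumptions here, and the actual allophone code range should be taken from the SpeakJet datasheet.

```c
#include <stdint.h>

/* Sketch of approach 2: MIDI note sets pitch, velocity selects the
   phoneme. Table size and base code are placeholder assumptions; the
   real phoneme code range is defined in the SpeakJet datasheet. */
#define NUM_ALLOPHONES 72
#define ALLOPHONE_BASE 128

/* Spread the 1..127 velocity range evenly across the allophone table,
   returning the phoneme code to send to the chip. */
uint8_t velocity_to_allophone(uint8_t velocity) {
    if (velocity == 0) velocity = 1;  /* treat 0 as the softest value */
    uint16_t index = ((uint16_t)(velocity - 1) * NUM_ALLOPHONES) / 127;
    if (index >= NUM_ALLOPHONES) index = NUM_ALLOPHONES - 1;
    return (uint8_t)(ALLOPHONE_BASE + index);
}
```

A velocity of 1 would then select the first allophone in the table and 127 the last, so a keyboard with velocity response effectively "scans" through the phoneme set.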
  19. Incidentally, I've just ordered a Soundgin chip and development board from a US webshop (stupidly expensive delivery to the UK, but that's another story). I'm hoping to bring that under MIDIbox control at some point in the future. a|x
  20. Incidentally, ac, could you possibly confirm that my observations about pitchbend and other parameter tweaks only applying to the next triggered note represent the expected behaviour? a|x
  21. Hi ac, thanks very much for your rapid reply! :) Yes, I think I was probably being a bit unrealistic in my expectations. Funnily enough, I've just ordered a slightly more recent chip, the Soundgin, which is supposed to do some of the same things as the SpeakJet, but may be a bit more stable. Not quite sure how I'm going to bring that under MIDIbox control, but there's a guy who's released an Arduino project to control it, which I might be able to borrow some code from. Thanks! It was a very simple setup. Ah, OK. I thought that might be the case. I thought maybe it was set up for a very specific use-case, which maybe explained why I was getting some odd results. Yep, found that, thanks. I started to produce a template for my Novation XStation, but only got as far as mapping the Jaw Open, Tongue Position, Speed and Master Volume parameters. Thanks very much for the info, ac; that's really helpful. Yeah, I don't know what settings TK used either. I can have a look, though; I think he sent me the updated MIOS32 project. I'll have a go at tweaking the various vars and see what happens. The Scale default variable there being set to 20 might have something to do with the pitch weirdness I was experiencing, perhaps? a|x
  22. Thanks for getting back to me, TK. I haven't read through all 16 pages of the thread, I must admit. I was inspired by the demos on the project page most, I think, and I've always had a fascination with speech-synthesis, so I'm sure I'd still have wanted to build it, anyway. I don't tend to mixdown or perform from synths running live so much, these days, so the instability isn't such a massive issue. I'll record a load of stuff in Live, and chop it up into useful segments. I was just interested to know if anyone else has noticed similar behaviour. Your comment does make a lot more sense, now I've heard the chip in action. I'm really intrigued with the software-based approach, too, which would definitely open up more possibilities, and potentially much better audio quality, with a decent DAC. I imagine there's probably example code out there for speech-synthesis applications like this. a|x
  23. Another example: It sounds great!! However, I'm starting to think there may be something wrong with either my board (not unlikely) or the converted LPC17 SpeakJet application. The operation of the SpeakJet seems generally quite unstable. I've noticed the following (some or all issues may be hardware or software errors, or just me misunderstanding how the application is supposed to work):

    Channel 5: sending notes produces no sound.

    Channel 6: doesn't respond to the Jaw Open and Tongue Position controllers (CC 40 and 41), and produces the same untuned white-noise-like sound for all keys. Oddly, the Speed parameter does seem to have an effect.

    Channel 7: seems OK. I can control the Tongue Position and Jaw Open parameters from my MIDI controller. Am I right in thinking the Jaw Open parameter has only two possible values? I don't seem to be able to change Tongue Position while an allophone is playing, which surprised me. I'm guessing changed values only affect the next triggered note. I guess the SpeakJet chip wasn't really designed for this kind of realtime tweaking.

    Channel 8: same as 6: untuned noise, responds only to the Speed parameter.

    Channel 9: as above.

    Channel 10: seems to work OK. Responds to the Stress and Speed parameters, though Stress (CC 43) values from my controller don't seem to be 'held', i.e. when the parameter is adjusted, only the next note is affected; any subsequent notes revert to a default value unless the controller is tweaked again. Maybe this is expected behaviour? This isn't the case with Channel 1 (all sound codes), which works as expected, though it does appear to be with Channel 2 (all allophones) and Channel 3 (all SFX), so I'm assuming this is an error.

    Channels 11-16: I can trigger different oscillator sounds. They're very 'clicky'. This is probably expected. I haven't mapped the synth parameters on my controller yet, so maybe it's just the default settings that exhibit this behaviour. Also, the note mapping seems weird: some adjacent notes have the same pitch, and the note-to-pitch mapping just seems generally odd. Is this because of the harmoniser/scaling used for the hand-tracker?

    General issues: pitchbend is glitchy, where implemented. Bending up sometimes has a delayed response; bending down works as expected for smaller bend values, but at extreme settings it glitches out, giving a wobbly sound which seems to bear little relation to the phoneme being triggered, plus occasional high-pitched pops, sustained noises and circuit-bent-type glitches. Again, bending only seems to affect the next note triggered, rather than the currently-playing sound, which is very counterintuitive. It also seems to lead to a situation where the pitchbend control is at its centre value, but the next note still plays at the wrong pitch. I get a lot of stuck notes. This seems to happen when the note is released before the triggered phoneme has finished playing (though I'm not sure it happens in every case). When initially plugged in, I get 'ready' twice, followed by 'goodbye', which isn't really a problem, but is a bit odd. Again, this may be expected behaviour.

    I have no way of knowing if any or all of these issues are present in the original PIC-based KII setup, unfortunately, or in Thorsten's LPC17 setup, which I've copied. It would be great if TK and/or audiocommander could confirm on their setups. I don't mean to be super-critical; I'm really impressed with the project. There's a good chance, as I said, that a lot of the above result from me not really understanding how things are supposed to work, and other issues I've identified may simply be the inevitable result of the way the SpeakJet chip was designed. I've also only spent a few hours working with it so far, so I'll probably find workarounds for most of these things anyway.

    I'm considering making some kind of interface for a future version of the SpeakJet application, with a screen and a couple of controls. I think I'd probably also rearrange things so that it responded on a single MIDI channel, or perhaps two (one for allophones/sounds and one for pitch), with a way of switching modes, rather than switching modes by changing MIDI channels. I've got to learn a bit of MIOS programming before I can do that, though. This might be a good project to get my teeth into, especially since audiocommander's code seems to be quite clear and well-commented. Cheers, a|x
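The single-channel, mode-switched layout mused about above might look roughly like the sketch below. This is a hypothetical C sketch, not code from the real MIOS32 application; the mode names and the choice of Program Change as the mode selector are assumptions for illustration.

```c
#include <stdint.h>

/* Hypothetical single-channel design: a MIDI Program Change selects
   what incoming notes mean, instead of spreading modes over channels.
   Mode names are made up, not taken from the real application. */
typedef enum { MODE_ALLOPHONES = 0, MODE_SFX = 1, MODE_OSC = 2 } sj_mode_t;

static sj_mode_t current_mode = MODE_ALLOPHONES;

/* Program Change 0..2 selects a mode; anything else is ignored,
   so stray program changes can't leave the box in an unknown state. */
void sj_program_change(uint8_t program) {
    if (program <= MODE_OSC)
        current_mode = (sj_mode_t)program;
}

sj_mode_t sj_get_mode(void) { return current_mode; }
```

The note handler would then dispatch on `sj_get_mode()` rather than on the incoming MIDI channel, which also frees the remaining channels for other uses.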
  24. Yeah, talking is... difficult, I think. You have to string words together from individual phonemes. I haven't tried it yet, but it should be possible, in theory. Maybe audiocommander has tried it. You can also save words and phrases in the chip's internal memory, but you have to do it with a Windows-only application, and then connect to the SpeakJet via a serial connection. I haven't set up my board with a serial connection, so it can only be controlled via MIDI, through the USB port. I'm sure it would be possible to add serial control, but I wanted to keep everything as compact as possible, so I could put the board in a small enclosure. a|x
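Stringing a word together, as described above, amounts to queueing a short sequence of phoneme codes and sending them to the chip back-to-back. A minimal sketch, assuming a simple transmit buffer; the codes in the test word are placeholders, not real SpeakJet phoneme values:

```c
#include <stdint.h>
#include <stddef.h>

/* Copy one word's phoneme codes into a transmit buffer, returning how
   many bytes were queued (0 if the word wouldn't fit). In a real setup
   the buffer contents would then go out to the SpeakJet's input line. */
size_t queue_word(const uint8_t *codes, size_t n_codes,
                  uint8_t *tx_buf, size_t tx_cap) {
    if (n_codes > tx_cap)
        return 0;  /* refuse rather than truncate mid-word */
    for (size_t i = 0; i < n_codes; i++)
        tx_buf[i] = codes[i];
    return n_codes;
}
```

Refusing to queue a word that won't fit (rather than truncating it) avoids the chip speaking half a word, which would be worse than silence.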
  25. A little demo: SpeakJet controlled by Ableton Live, with some tweaking of the 'Stress' and 'Speed' parameters from a MIDI controller. I've added a touch of note randomisation, and I'm also transposing the MIDI notes up and down so that different allophones get triggered. It's a little bit unstable: the same MIDI notes don't always seem to trigger the same allophone, and there's definitely something weird going on with the release envelope, but overall I'm quite pleased. I'm starting to realise how time-consuming it would be to actually have it speak or sing intelligible words, though. I've added a little reverb from NI Guitar Rig's Reflektor, and a little pingpong delay. I've also applied a gate to the input to squash the background noise. It's much less noisy now I've added the lowpass filter, but it's definitely still not hi-fi! a|x