Popular Content

Showing content with the highest reputation on 11/25/2013 in all areas

  1. Hi there! I'm having trouble using the timers of my STM32 core: it hangs and freezes as soon as I initialise them. Let me show you some code:

```c
/////////////////////////////////////////////////////////////////////////////
// This hook is called after startup to initialize the application
/////////////////////////////////////////////////////////////////////////////
void APP_Init(void)
{
  MIOS32_DELAY_Init(0);

  // initialize MIOS32 Timer #0, so that our "key-value readout" function
  // is called every 1 ms:
  MIOS32_TIMER_Init(0, 1000, Timer, MIOS32_IRQ_PRIO_LOW);

  // initialize a timer for the UI (every 40 ms)
  MIOS32_TIMER_Init(1, 40000, UI, MIOS32_IRQ_PRIO_LOW);

  MIOS32_SPI_TransferModeInit(0, MIOS32_SPI_MODE_CLK1_PHASE1, MIOS32_SPI_PRESCALER_16);

  Keys_Init(); // only initialises variables
  UI_Init();   // same here

  // initialize all LEDs
  MIOS32_BOARD_LED_Init(0xffffffff);
}
```

My UI timer works fine, so there's no problem there. The strange thing is my other timer. Its callback receives the data as 144 bytes, where three bytes actually contain only two 12-bit values. At the end, the callback merges each one and a half bytes into one 12-bit value, and because the values also arrive in the wrong order, it sorts them into the right order. It would have been better to let the slave CPU do this - however, the 20 MHz 8-bit ATmega used as the slave is already busy reading the values from the ADC chips and transmitting them to the master.
```c
////////////////////////////////////////////////////////////////////////
// Callback function of the readout timer.
////////////////////////////////////////////////////////////////////////
void callback(void)
{
  Data[counter] = byte[0];

  if (counter >= 144) {
    MIOS32_SPI_RC_PinSet(0, 0, 1);

    int i;
    // bring the data into the right order
    for (i = 0; i < 12; i = i + 2) {
      Values[8*i    ] = (Data[i*3/2 +   1] << 4) | ((Data[i*3/2 +   2] & 0b11110000) >> 4);
      Values[8*i + 1] = (Data[i*3/2 +  19] << 4) | ((Data[i*3/2 +  20] & 0b11110000) >> 4);
      Values[8*i + 2] = (Data[i*3/2 +  37] << 4) | ((Data[i*3/2 +  38] & 0b11110000) >> 4);
      Values[8*i + 3] = (Data[i*3/2 +  55] << 4) | ((Data[i*3/2 +  56] & 0b11110000) >> 4);
      Values[8*i + 4] = (Data[i*3/2 +  73] << 4) | ((Data[i*3/2 +  74] & 0b11110000) >> 4);
      Values[8*i + 5] = (Data[i*3/2 +  91] << 4) | ((Data[i*3/2 +  92] & 0b11110000) >> 4);
      Values[8*i + 6] = (Data[i*3/2 + 109] << 4) | ((Data[i*3/2 + 110] & 0b11110000) >> 4);
      Values[8*i + 7] = (Data[i*3/2 + 127] << 4) | ((Data[i*3/2 + 128] & 0b11110000) >> 4);

      Values[8*(i+1)    ] = (Data[i*3/2 +   2] << 8) | Data[i*3/2 +   3];
      Values[8*(i+1) + 1] = (Data[i*3/2 +  20] << 8) | Data[i*3/2 +  21];
      Values[8*(i+1) + 2] = (Data[i*3/2 +  38] << 8) | Data[i*3/2 +  39];
      Values[8*(i+1) + 3] = (Data[i*3/2 +  56] << 8) | Data[i*3/2 +  57];
      Values[8*(i+1) + 4] = (Data[i*3/2 +  74] << 8) | Data[i*3/2 +  75];
      Values[8*(i+1) + 5] = (Data[i*3/2 +  92] << 8) | Data[i*3/2 +  93];
      Values[8*(i+1) + 6] = (Data[i*3/2 + 110] << 8) | Data[i*3/2 + 111];
      Values[8*(i+1) + 7] = (Data[i*3/2 + 128] << 8) | Data[i*3/2 + 129];
    }

    MIOS32_SPI_RC_PinSet(0, 0, 0);
  } else {
    counter++;
    MIOS32_SPI_TransferBlock(0, NULL, byte, 1, &callback);
  }
}
```

The readout timer below is called every millisecond to start reading values from the slave. It only reads one byte and then calls a callback function to read the next byte, until all bytes have been read. This behaviour introduces small gaps between the individual bytes and gives the slow 20 MHz slave CPU some time to refill its sending buffers.
```c
////////////////////////////////////////////////////////////////////////
// Readout timer
////////////////////////////////////////////////////////////////////////
void Timer(void)
{
  counter = 0;
  MIOS32_SPI_TransferBlock(0, NULL, byte, 1, &callback);
}
```

What am I trying to do? I'm trying to MIDIfy a real piano. For this purpose I have installed an optical sensor under each key, which detects how far the key is pressed down. I need to scan these sensors every 1 ms to get readings that allow me to calculate a good velocity value. So there are 96 optical sensors that need to be scanned at a rate of at least 1000 Hz. I have built a slave board with an AVR @ 20 MHz to scan 12 eight-channel ADCs and send the results over to the STM32 core. Because the results are 12-bit, I have to send them over as 144 bytes, where 3 bytes actually contain only two values, packed bit-wise. The slave board is working fine.

However, the time frame is very limited. The main problem is that the AVR is too slow to refill its sending buffers between the transmission of two bytes. So I use the version above to introduce small gaps between two bytes; the gaps come from the overhead of re-triggering the DMA after each byte. I don't know if it's a good solution, but it seemed to work with 40 ms delays. However, I need 1 ms (or even less).

So what happens when I upload this code to the STM32? It freezes. I'd like to know why, because my timer function should be an easy task for this CPU (it has hardware multiply and divide units). All I do is configure the DMA 144 times (OK, that's sick, I know) and do some maths at the end. Even if the time between the DMA call and the callback function is very long, there should still be time left to serve USB and my UI task, too. So here I am with no idea what to do... Maybe you have an idea for me. Thank you! Bääääär

PS: I have not been successful with the RTOS CPU-load measurements. I followed the guide on the doxygen page, but I can't compile due to missing functions (?).
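To make the byte layout easier to follow, here is a minimal, self-contained sketch of the packing scheme described above (two 12-bit values per three bytes). `pack12` and `unpack12` are hypothetical helper names for illustration, not part of the project:

```c
#include <stdint.h>

// Hypothetical helpers (not from the project) illustrating the
// "two 12-bit values in three bytes" layout the callback unpacks:
//   byte0 = top 8 bits of v0
//   byte1 = low 4 bits of v0 | top 4 bits of v1
//   byte2 = low 8 bits of v1
static void pack12(uint16_t v0, uint16_t v1, uint8_t out[3])
{
    out[0] = (uint8_t)(v0 >> 4);
    out[1] = (uint8_t)(((v0 & 0x00F) << 4) | (v1 >> 8));
    out[2] = (uint8_t)(v1 & 0x0FF);
}

static void unpack12(const uint8_t in[3], uint16_t *v0, uint16_t *v1)
{
    *v0 = (uint16_t)((in[0] << 4) | ((in[1] & 0xF0) >> 4)); // 1.5 bytes -> 12 bits
    *v1 = (uint16_t)(((in[1] & 0x0F) << 8) | in[2]);
}
```

Note that for the second value, the shared middle byte has to be masked with `0x0F` before shifting; the `(Data[...] << 8) | Data[...]` expressions in the callback above take the whole byte, so the high nibble of the neighbouring value ends up in bits 12-15 unless it is masked off elsewhere.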
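The post doesn't show the velocity calculation itself, so here is a hedged sketch of one common way 1 ms position samples can be turned into a MIDI velocity: count how many samples the key needs to travel between two position thresholds, and map shorter travel times to higher velocities. All names and constants here (`THRESH_START`, `THRESH_END`, the linear mapping) are assumptions for illustration, not the project's actual code:

```c
#include <stdint.h>

// Assumed 12-bit positions: 0 = key at rest, 4095 = fully pressed.
#define THRESH_START 1000   // position where timing starts (assumption)
#define THRESH_END   3000   // position where the note fires  (assumption)

// Returns a MIDI velocity 1..127, or 0 if no complete press was seen.
// pos[] holds one 12-bit position sample per millisecond.
static int velocity_from_samples(const uint16_t *pos, int n)
{
    int t_start = -1;
    for (int t = 0; t < n; t++) {
        if (t_start < 0) {
            if (pos[t] >= THRESH_START)
                t_start = t;              // key crossed the first threshold
        } else if (pos[t] >= THRESH_END) {
            int dt = t - t_start;         // travel time in milliseconds
            int vel = 127 - dt * 8;       // simple linear map (assumption)
            return vel < 1 ? 1 : vel;     // clamp slow presses to velocity 1
        }
    }
    return 0;
}
```

The 1 ms scan rate matters here because `dt` is quantised to whole samples; with a 40 ms scan interval, a fast and a slow press would be indistinguishable.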