
Multidimensional arrays


stryd_one


Hey MIDIBoxers,

Forgive me if this is a silly question or already answered but... Can we use multidimensional arrays? as in:

<code>

my_array [8] [8] [16]

</code>

Would be really handy for a sequencer like this...

<code>

Step_Param_Value [Track] [step] [Param]

</code>

I must admit, once you get the hang of it, C can be so nice and pretty to work with.... If something like this could work it would be awesome.

But then it raises the question - is C the best tool for the job? I think TK mentioned something about SDCC not performing well with arrays?

Link to comment
Share on other sites

hi stryde

just so you know, i have not programmed for midibox at all

but i do know C

what i am suggesting assumes the c compiler used with midibox will recognise the syntax

with a 1 dimensional array, each element of the array is a value of the arrays type.

when you use a 2 dimensional array, the first index selects a pointer to a 1 dimensional array, which the second index then looks into.

ie:

    a 1 dimension array of five elements:

            array1[] = { 0, 1, 2, 3, 4}

    a 2 dimension array of 3 by 5 elements:

            array1[] = { pointer to array2, pointer to array3, pointer to array4}

            array2[] = { 0, 1, 2, 3, 4}

            array3[] = { 1, 2, 3, 4, 5}

            array4[] = { 2, 3, 4, 5, 6}

    so accessing array1[2][1] = 3

          (remember C starts counting from 0 therefore we are looking for array1[3rd element][2nd element])

now extending the above, there is nothing limiting the 2nd level arrays (array2, array3, array4) to having the same size.  these 2nd level arrays may be different sizes, but doing so will make the code more difficult to administer and more bug-prone, as there are no checks in c to ensure you are not referencing beyond an array's length.
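one point worth adding: in standard C, a 2D array declared in one go (e.g. `int a[3][5]`) is a single contiguous block, and the compiler does the index arithmetic itself; the pointer-per-row picture above corresponds to an explicitly built array of pointers (a "jagged" array). A small sketch of both, using the same values as the example:

```c
/* A true C 2D array: one contiguous block; the compiler computes
   the offset (row * 5 + col) for you. */
static int flat[3][5] = {
    {0, 1, 2, 3, 4},
    {1, 2, 3, 4, 5},
    {2, 3, 4, 5, 6},
};

/* The pointer-per-row picture described above, built explicitly
   (a "jagged" array -- rows could even differ in length). */
static int row0[] = {0, 1, 2, 3, 4};
static int row1[] = {1, 2, 3, 4, 5};
static int row2[] = {2, 3, 4, 5, 6};
static int *jagged[3] = {row0, row1, row2};
```

both give the same answer for the example above: `flat[2][1]` and `jagged[2][1]` are 3.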

this information can be confirmed in any reasonable c manual or teaching textbook, and i would recommend looking the subject up as i haven't touched the area of declaring arrays; i prefer c++ or java so i can avoid malloc and simplify the process with new.

as i said before, i have not programmed for midibox before.  if the information above doesn't help, it may be that the compiler cannot handle multiple dimension arrays (which i would find disturbing, and i would be asking what else it doesn't support).

OrganGrinder

Link to comment
Share on other sites

Thanks for that OG :)

I just realised that my post got kinda screwed up...

It was meant to look like this:

my_array [x] [y] [z]

But I suspect that the theory is similar, the first (3-D) array points to the second (2-D array), points to the third (normal) array.

So the questions now are, does SDCC compile multidimensional arrays, and would the performance be affected?

I'll find out the answer to the first q easily, but I have no idea about the other?

Link to comment
Share on other sites

So the questions now are, does SDCC compile multidimensional arrays, and would the performance be affected?

i cannot answer the first part at this time.  but the second part is that the extra look up does have an effect on performance.

let me explain

btw the times are only approximations from my experience with 80x86 assembly; if someone knows the actual times, it would be appreciated.

for an ordinary variable say an int, the lookup would be 1 cycle, just recalling the value stored by the variable (at a specified memory location).

for an one dimensional array, the lookup would be about 3-4 cycles

    lookup the pointer to the array, add the offset, and retrieve the value

for a two dimensional array, the lookup would be about 6-8 cycles

    lookup the pointer to top level array, add the offset, retrieve pointer to second level array, add 2nd level offset, retrieve the data.

for a three dimensional array, the lookup would be about 9-12 cycles

    lookup the pointer to top level array, add the offset, retrieve pointer to 2nd level array, add 2nd level offset, retrieve pointer to 3rd level array, add the offset, retrieve the data.

as you can see, as you go deeper into a multidimensional array, the cost can become quite large if it is done frequently.  this can be offset with the use of temporary variables, particularly in loops (which is where the use of arrays is often done).
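the temporary-variable trick can be sketched like this (the grid and names here are made up for illustration):

```c
/* Hypothetical 8x16 grid of step values. */
enum { TRACKS = 8, STEPS = 16 };
static unsigned char grid[TRACKS][STEPS];

/* Without the temporary, grid[track][step] repeats the track-level
   lookup on every iteration; hoisting the row pointer does that
   lookup once per call. */
void clear_track(unsigned char track)
{
    unsigned char *row = grid[track];   /* top-level lookup done once */
    unsigned char step;
    for (step = 0; step < STEPS; step++)
        row[step] = 0;                  /* cheap single-level access */
}
```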

other advantages of multidimensional arrays include the simplification of code (i will not say easier to debug, because any problems avoided by keeping related data in the multidimensional array are offset by different potential bugs).

lets have a look at the difference with some code:

for demonstration purposes, i will show some differences between 1 and 2 dimensional arrays:

1 dimension:

int doh[] = {1,2,3,4,5};   /* "do" is a reserved word in C, hence "doh" */
int rei[] = {2,3,4,5,6};
int me[]  = {0,9,8,7,6};
int i, j;

for (i = 0; i < 5; i++)
{
    for (j = 0; j < 5; j++)
    {
        doh[j] = i + j;
        rei[j] = i + j;
        me[j]  = i + j;
    }
}

2 dimensions:

int doh[] = {1,2,3,4,5};
int rei[] = {2,3,4,5,6};
int me[]  = {0,9,8,7,6};
int *scale[3];
int i, j, k;

scale[0] = doh;
scale[1] = rei;
scale[2] = me;

for (i = 0; i < 5; i++)
{
    for (j = 0; j < 5; j++)
    {
        for (k = 0; k < 3; k++)
        {
            scale[k][j] = i + j;
        }
    }
}

now both examples do the same task, but if you extend the number of array elements to 20, 30 or more, the first example would become large, unwieldy and bug-prone as you create more array names and need to remember which to include in your calculations and in what order.  on the other hand, the second example would only need a small amount of modification and continue to do its task.

just remember that with the 18F452, every assembly operation takes 1 clock cycle except branches (jumps), which take 2 cycles.  as you can see in the examples, the first example can become unwieldy if its size increases, but the second example will slow down as more lookups and branches occur (each iteration of a loop requires a branch instruction in assembly).

additionally you will also need to consider the speed of the processor; the 18F452 is rated at 10 MIPS (million instructions per second), so if you don't go overboard in creating large arrays and looping through them with abandon, the overall effect would be negligible.

personally, i would first get my code to work correctly using deep nested arrays if necessary.  then if needed, i would reexamine the code and introduce optimisations such as temporary variables and unwinding array nests in order to get the code faster.  it is quite possible to double the size of your code and only improve the program speed by 5-10 percent.

just remember when optimising your code - always keep backups of your code!!! that way you can discard optimisations that don't work - never hope you can undo the changes you make when you find the changes don't work.

I know this was a bit verbose, but i wanted to provide some of the reasons to use multidimensional arrays and some of the pitfalls.

OrganGrinder

Link to comment
Share on other sites

hi stryde

i had a quick look at that book you posted in another thread

I've been reading this online book and thought that it might be handy to other C n00bs like myself Wink

It describes C in a fair amount of detail which is great, but the thing that I find makes it really special is that it is also very easy on those not familiar with C. I suspect it'd be good if you have no programming experience at all too.

http://publications.gbdirect.co.uk/c_book/

i only had a quick look, but it seems as good as anything from a bookstore.

if any of my above explanations are unclear, look it up in the book (ch 5).  they had editors, proofreaders etc. on payroll, so it should be easy to read.

if you have any questions, post away, there are always people willing to help (this from a midibox newbie to a midibox guru ... lol)

OrganGrinder

Link to comment
Share on other sites

you may have to pay for pointers by having to include a SDCC lib, which may blow up your code. If it can be avoided, I would avoid it.

Unfortunately I have to rewrite my whole sensorizer class, because the data handling for bankstick storage got too complex. I overlooked this issue. I would prefer working with data that can be easily read from and written to a bankstick.

(but I'm no C-Guru, remember ;) )

Cheers, Michael

Link to comment
Share on other sites

You can always do it on your own using pointer math if the compiler does really gross things or doesn't support it natively.

For example, let's say you want a 3x4x5 matrix, each entry being a byte (with indices named x, y, z respectively).  So you declare:

unsigned char mem[3*4*5];

now you write a little calling routine called calculate_offset or something, that, given values x, y, z, it returns the element in the linear data structure:

unsigned char calculate_offset(unsigned int x, unsigned int y, unsigned int z)

{

  return ((x*20) + (y*5) + z);

}

now you access members of the 3D array like so:

mem[calculate_offset(2,3,2)] = 5; // assigns the value 5 to the element at position (2,3,2)
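Written out as a self-contained sketch (assuming the 3x4x5 dimensions that the `mem[3*4*5]` declaration implies, so X=3, Y=4, Z=5):

```c
enum { DIM_X = 3, DIM_Y = 4, DIM_Z = 5 };
static unsigned char mem[DIM_X * DIM_Y * DIM_Z];

/* x strides over whole y-z planes (4*5 = 20 bytes each),
   y strides over z-rows (5 bytes each). */
unsigned char calculate_offset(unsigned int x, unsigned int y, unsigned int z)
{
    return (unsigned char)(x * (DIM_Y * DIM_Z) + y * DIM_Z + z);
}
```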

I probably made a mistake in the code somewhere, I'm not really paying full attention as I'm about to go out for the evening.

Cheers,

Tom

Link to comment
Share on other sites

hi tom

i agree that you can make valid code with the use of pointer math.  but nowadays it is considered bad programming style.  there are even books on learning c which do not cover pointer mathematics.

the big problem with c/c++ is that there are no checks on memory access.  a pointer can be assigned to any location in memory; the type associated with the pointer is only used in determining the size of the memory block being accessed.  with arrays, using a subscript that is too large, or using pointer mathematics, can access a part of memory outside of the array.  the big problem is that accessing memory outside of an array is a runtime error that is not picked up by the compiler, and may only be detected through observing "strange" results.

for example

int ar1[] = {1,2,3,4};

int p;

int ic;

p = 5;

for (ic  = 0; ic<= 4; ic++)

{

        ar1[ic] = ar1[ic] + 1;

}

printf ("%d", p);

now in most compilers, memory allocation is performed sequentially, so in this case ar1 would occupy 4 int-sized blocks, immediately followed by an int block for p and then an int block for ic.  as you can see from the code, the loop iterates five times (ic = 0,1,2,3,4), but on the 5th time ar1[ic] will actually access the memory location of p. so the value of p has been altered without a single explicit reference to p!!!!

btw tom

unsigned char mem[3*4*5];

unsigned char calculate_offset(unsigned int x, unsigned int y, unsigned int z)

{

  return ((x*20) + (y*5) + z);

}

doing something like this is a bad idea.

FIRST, you are mixing explicit values with variables.  what i mean is: what will happen if you change the declaration to, say, unsigned char mem[3*4*6]?  now if you go through the code and change every occurrence of x*20 with relation to mem, your program may still work, but if you miss just one it will cause a bug which would be difficult, if not impossible, to find.

instead if you have to do it, try this:

#define memX 3

#define memY 4

#define memZ 5

unsigned char mem[memX * memY * memZ];

unsigned char calculate_offset(unsigned int x, unsigned int y, unsigned int z)

{

  return ((x*memY*memZ) + (y*memZ) + z);

}

what happens here is that the precompiler will scan through your code and substitute the numerical values for the macro names (this is how c does constants).  but in addition, you only need to alter one line and it will be reflected throughout your code, provided you have consistently used the macro names.  additionally, it will make the code easier to read, as you are reading labels which are associated with what they represent rather than some arbitrary number.

SECOND, the pic is not a pentium 4.  what i am getting at is that the modern pentium chips are capable of performing an integer multiply in a single clock cycle; the pic takes several cycles.  therefore, the frequent use of multiply and divide will slow your code down.  the only fix i can think of is to rework your algorithms, but in the above case, i don't think that is possible without doing some nifty stuff which will take longer than applying the multiplies.

THIRD, see my comments on pointer mathematics - especially if you try to access beyond your declared array sizes

oh - just reread your post, it looks like you are not advocating the use of pointer math, just offering it as an alternative if all else fails.  sorry about hitting so hard, but at least i have set out why pointer math should be avoided.

OrganGrinder

Link to comment
Share on other sites

This is really good stuff...and hey OG, I like verbose :) Heheheh you guys keep posting new stuff before I can post my reply...This is my third attempt  ;D

It does make me wonder the best way to go about storing the variables for a sequence though....

The pointer math method seems more like the ASM way of doing things, but obviously isn't highly recommended in C ;) ... Multidimensional arrays seem CPU and possibly memory hungry.

I know that there would be a fixed number of steps per track (16) and a fixed number of parameters per step (let's say 4). I was thinking that a structure and a one dimensional array could work if you had something like this:

step[n].Param1

step[n].Param2

step[n].Param3

step[n].Param4

But that would leave me with a maximum of 256 entries which means only 16 tracks of 16 steps (not enough)
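the struct idea above might look like this as a sketch (field names follow the post; one array per track would keep each individual array small):

```c
/* One step holds its four parameters together. */
typedef struct {
    unsigned char Param1;
    unsigned char Param2;
    unsigned char Param3;
    unsigned char Param4;
} Step;

/* 16 steps for one track; further tracks could each get their own
   array to stay within per-array size limits. */
static Step step[16];
```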

With all the knowledge going on in this thread I thought I would be silly not to ask for more of your advice, I hope you don't mind.

Link to comment
Share on other sites

Hi stryde

I know that there would be a fixed number of steps per track (16) and a fixed number of parameters per step (let's say 4). I was thinking that a structure and a one dimensional array could work if you had something like this:

step[n].Param1

step[n].Param2

step[n].Param3

step[n].Param4

But that would leave me with a maximum of 256 entries which means only 16 tracks of 16 steps (not enough)

well, if you already know structs, then you might be ready for advanced data structures.

what i would suggest is to look up linked lists, as they have a very small maintenance cost and very low memory requirements.  if you look linked lists up, you may come across vectors (an implementation of an array with a variable size) - do not use a vector structure; the pic has a small amount of memory and we are trying to use a real time operating system, so vectors are unsuitable for a midibox environment!!!

you should be able to find decent tutorials about implementing linked lists over the net; otherwise an examination of learning-C books should get the information needed.

i am not prepared to describe how to implement a linked list at this time (i will have to draft and proof read it first).

it may be possible that linked lists are implemented in an SDCC lib, but i don't know. (thanks audiocommander for reminding me of that library)

a few key points about linked lists include:

  • a linked list is a sequential data structure which can take a variable number of elements (any number between 0 and as many as your memory can take).
  • the memory requirement of a linked list is the data being stored plus 1 pointer per data element - this works out to be about the same as for an array
  • due to the structure of a linked list, random access is rendered very inefficient, but sequential access is very fast.
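a minimal sketch of a singly linked list, with nodes taken from a static pool rather than malloc (which suits a small micro; all names here are illustrative):

```c
#include <stddef.h>

typedef struct Node {
    unsigned char value;
    struct Node  *next;   /* the one pointer of overhead per element */
} Node;

/* Static pool instead of malloc. */
static Node pool[3];

/* Link the pool into the list 10 -> 20 -> 30 and return its head. */
Node *build_demo_list(void)
{
    pool[0].value = 10; pool[0].next = &pool[1];
    pool[1].value = 20; pool[1].next = &pool[2];
    pool[2].value = 30; pool[2].next = NULL;
    return &pool[0];
}

/* Sequential access -- the thing linked lists are fast at. */
unsigned int list_sum(const Node *head)
{
    unsigned int sum = 0;
    for (; head != NULL; head = head->next)
        sum += head->value;
    return sum;
}
```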

just so you know, i learned about linked lists (as well as other data structures) in 2nd semester university (we had to know how to program first!).  these data structures (including linked lists) can be confusing to some people, and some care needs to be taken in their implementation and use.

this should make you sufficiently curious and scared, so i will leave you now to draft my description on making linked lists.

OrganGrinder

Link to comment
Share on other sites

So I had a look around and I think this PDF explains it to a fair extent: http://cslibrary.stanford.edu/103/ ?

I see what you mean about the sequential vs random access... I think this method is well suited to a traditional sequencer because of this, and also because it seems simple to add records into the list, but I think it probably wouldn't work too well in my case because the way my seq works, random read and writes would be pretty common, as it's not necessarily linear in the way it plays...Also, the sequence will have a fixed structure, all the steps of all the tracks should be available at all times, and just filled with 0's if they're not in use, so there wouldn't be a need to add or remove records from the list.

While I think of it, I should mention that I might need to treat some of the data as bitfields. I don't know if that would affect things or not.

Just when I thought it was going to be easy huh? :-\

Link to comment
Share on other sites

Hi!

As we pointed out in the other thread, arrays with more than 256 elements are a bad idea because they will not be compiled (linked) directly with SDCC and the linker.

unsigned char myArray[256]; // will work

unsigned char myArray[257]; // will not be linked!

And the same problem is with multidimensional array:

unsigned char myArray[64][4]; // will work

unsigned char myArray[64][5]; // will not be linked!

So it would be a good idea to use multiple one dimensional arrays.

//instead of myArray[2][4][256];

unsigned char myArray00[256];

unsigned char myArray01[256];

unsigned char myArray02[256];

unsigned char myArray03[256];

unsigned char myArray10[256];

unsigned char myArray11[256];

unsigned char myArray12[256];

unsigned char myArray13[256];

if it is possible.

There is a big difference if you use constants or if you calculate the indices during runtime.

I tried out:

----------- version one ----------

unsigned char otherArray[2]; // to prevent constants that would be optimized away

unsigned char myArray[3][4];

....

myArray[otherArray[0]][otherArray[1]];

---------------------------------

----------- version two ----------

unsigned char otherArray[2]; // to prevent constants that would be optimized away

unsigned char myArray[12];

....

myArray[otherArray[0]*4 + otherArray[1]];

---------------------------------

----------- version three ----------

unsigned char myArray[3][4];

....

myArray[1][2];

---------------------------------

Version two produces only about 2/3 of the assembler statements of version one.

Version three (with constants) will be optimized to myArray[6];

Link to comment
Share on other sites

Thanks Thomas. Maybe that's what TK meant when he spoke of the poor performance with arrays... adding an extra dimension to the array, I guess it makes sense that the code size increases, but now we know exactly why it's best avoided....

Link to comment
Share on other sites

hi stryde

if you want fast random access, consider a tree structure

for what you are describing, the use of a sorted tree would be of benefit.

it keeps the linked lists nature of allowing for variable amounts of data

and works out to be a little slower than arrays.  eg a linked list searching among 1000 data records would take on average 500 lookups; a tree of the same size can be as good as 9 lookups on average (there are ways to keep this consistent).

the problem is that trees are not for the faint hearted, they can be a pain to administer.  as i said above if the tree is sorted, it makes it easier to search.

additionally, if you balance the tree (this is where it can be really mind blowing, but it is not that hard to program), it will keep search times down (in theory, a tree in the worst case scenario is essentially a linked list, but i suggest reading about trees to understand what i mean and why it happens).

just as a side note, trees are commonly used in games and other virtual environments (read: situations which require extremely fast access and administration speeds) because of their flexibility to cater to dynamic situations, reasonable random search times, and the possibility of keeping associated data close together in the tree.  but bear in mind that recursive algorithms are often used in these applications. (if anyone has a complaint about recursive algorithms, just remember that there are situations where recursion is more efficient than iteration, and this is not the right post for a dispute on this matter.)
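to make the lookup-count claim concrete: searching a sorted binary tree is just a loop that roughly halves the candidates on each comparison, about log2(n) steps. A hand-built sketch (a real implementation would also handle insertion and balancing):

```c
#include <stddef.h>

typedef struct TNode {
    int key;
    struct TNode *left, *right;
} TNode;

/* Hand-built sorted tree:    4
                            /   \
                           2     6   */
static TNode n2 = {2, NULL, NULL};
static TNode n6 = {6, NULL, NULL};
static TNode n4 = {4, &n2, &n6};

/* Each comparison discards half of the remaining tree. */
int tree_contains(const TNode *t, int key)
{
    while (t != NULL) {
        if (key == t->key)
            return 1;
        t = (key < t->key) ? t->left : t->right;
    }
    return 0;
}
```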

OrganGrinder

Link to comment
Share on other sites

Thanks again OG :) I really appreciate all this help.

Funny, I saw binary trees yesterday on that Stanford site too: http://cslibrary.stanford.edu/110/ and thought that they sounded like they might be useful. I'll have a thorough read on it tonight.

I also read about doubly-linked lists, with a pointer to the previous node as well as the next; do you know if that would be suitable?

Link to comment
Share on other sites

Hi!

Some additional words:

1. since an array can have only <=256 elements, a one dimensional array is better because the ("pointer") arithmetic can be calculated in 8bit-Integers ("unsigned char"). But every multiplication has a 16bit-result, so there's overhead to handle those 16bit-values in more steps.

And with two dimensional arrays the whole calculation is done with 16bit.

2. I'm not sure, but I can imagine that higher level data structures like linked lists or binary trees have more overhead than simply accessing arrays with <=256 elements directly. Even if you have to calculate the address.

Each element of a linked list needs at least 2 bytes (data and pointer).

But sure: it depends on your needs. Maybe a search could be improved. But you pay with RAM if you want more speed.

3. Sometimes it's better to replace multiplication by addition:

--------------version 1---------------

unsigned char result = value1 * value2;

--------------------------------------

vs.

--------------version 2----------------

unsigned char result = 0;

unsigned char i;

for (i = 0; i < value1; i++) result += value2;

--------------------------------------

While version 2 looks more complicated, it produces less assembler code!

If value1 is small (e.g. 2) it's even faster. 

4. Always look at the *.asm output if you're not sure.

Link to comment
Share on other sites

hi thomas

your point 1 makes sense about creating overhead to perform pointer arithmetic.

in point 2, stryde is inquiring about the use of multiple dimension arrays, and i brought forward the idea of using an advanced data structure.  firstly, because the number of elements is unknown, and from what i was able to gather from his earlier posts, a means which did not require random access was acceptable.  the conclusion i came up with was linked lists, because they meet the needs that were mentioned and are fairly easy to use once you know what is happening.

now i acknowledge that a linked list has a small memory overhead (1 pointer in addition to the data); also binary trees and two way linked lists have a larger overhead (2 pointers in addition to the data).  but stryde has mentioned that the data is at least 16 bytes in size, so this overhead is fairly low.  of course not as low as an array (1 pointer total plus the data).

in regard to processing overhead, stryde has said that his project will require random searching and editing of data, but minimal adding and removing of data elements.  as mentioned earlier, linked lists are hopeless with random searches, but searching a sorted and balanced binary tree is faster than searching a sorted array (the btree just follows the pointers, while an array has to calculate where to search next).  the big difference is that the binary tree does the sorting when the data element is inserted, which has some overhead, and possibly some more if the tree needs to be rebalanced when an element is added or removed.

point 3, about the difference between the efficiency of a multiply function as opposed to an adding loop.  care needs to be taken: firstly, according to Microchip, all assembly commands take 1 cycle except branches, which take 2.  now your loop has overhead in creating the loop and the branch with each iteration.  any idea how this compares to how it is done on the chip?  maybe the pic does exactly the same thing, or maybe something else...  a comparison would be useful; i presume the pic's method works reasonably well for all potential values.  but if you can guarantee that your multiplying values are small, and your code is faster, go ahead and use it - it's a good idea if you need to squeeze a little more speed out of your code, but i suggest getting the code to work first before optimizing. (and document what you are doing so people reading it will understand why)

point 4, what are you getting at here?  people who are new to programming are overwhelmed by c, trying to keep everything together and doing what it is supposed to do.  making them read the assembly derived from their code (probably without documentation) at best will scare them, or even drive them away.  let beginner programmers write their code, get it to work and be confident about it.  then, if they wish, they can examine what their code means to the computer (assembly is only 1/2 step above the actual machine code, C is at the top of the staircase).

after all this, let's not forget why we are posting on this topic - stryde needs some help in organising the data for his program.

from previous posts, i believe stryde requires a data structure where most changes to the structure itself are at the beginning of the program; after that the structure is relatively stable, requiring random searches and edits of the data elements themselves (which do not affect the data structure), and finally removal of the data structure when the program is finished.  additionally, it seems that stryde would like to go beyond the 256 element limit which appears to affect arrays with the c compiler being used for the pic (it is an 8 bit processor after all).

thanks for your input, it's got me to revisit and think about issues i haven't visited in a while.

OrganGrinder

Link to comment
Share on other sites

While I think of it, I should mention that I might need to treat some of the data as bitfields. I don't know if that would affect things or not.

Due to a restriction to SDCC, bitfields are limited to 8 bits!  :-\

If you're interested in my personal opinion: forget about "nice, cleaned-up" variable access that requires nested pointer structures, multiplication operators for index access, arrays with more than 256 entries or other nasty things.

Can't you split it up somehow in smaller data-chunks to avoid these huuuuge arrays?

(eg 1 array for each bar?)

:)

(And besides: do you plan to store these arrays/data at some point on a bankstick?)

Link to comment
Share on other sites

it keeps the linked lists nature of allowing for variable amounts of data

and works out to be a little slower than arrays. eg a linked list searching among 1000 data records would take on average 500 lookups; a tree of the same size can be as good as 9 lookups on average (there are ways to keep this consistent).

Thought I should mention... The amounts of data wouldn't change in my case, because there would be a fixed number of parameters available. Also, I wouldn't need to search for data, but I would need to jump to read or write random parameters.

I feel maybe I should be more specific when the theory gets this technical. The data would be something like this:

Track 1
|
|--Divider
|--Direction
|--Mute
|
|------ Step 1
|         |------Note
|         |------Velocity
|         |------Length
|
|------ Step 2
|         |------Note
|         |------Velocity
|         |------Length
|
...
|------ Step 16
          |------Note
          |------Velocity
          |------Length

Track 2
|
|--Divider
|--Direction
|--Mute
|
|------ Step 1
|         |------Note
|         |------Velocity
|         |------Length
|
|------ Step 2
|         |------Note
|         |------Velocity
|         |------Length
|
...
|------ Step 16
          |------Note
          |------Velocity
          |------Length

...

Track 32

This is just an example of the layout but doesn't contain so many parameters as a full design. There are always 32 tracks * 16 steps * 3 7-bit values available, even if the track is not currently in use, so there are always a fixed number of variables.

Sometimes the seq might read the note, velocity and length from a particular step (when it plays it), sometimes it would write all three (when you program the step), but sometimes it might just write the velocity of a step, or the track divider; sometimes it might write all 16 note values but not touch the velocity on any of the steps.... So it would require random access, but not really searches. I'm sorry if I made things confusing, I'm not too good with the terminology it seems :-\

This thread is a real education :) Please don't be too concerned if we get a bit side-tracked, I'm sure that I'm not the only one who's finding all this interesting.... But I sure am puzzled about the right way to do this. It's a good thing that I enjoy a challenge ;)

Edit: Ignore that part about 7-bit values, they can be 8-bit. I just read that thing about 8-bit fields. And yes please, I'm interested in everyone's personal opinion :)

Re: 1 array for each bar, I was thinking something like that might be the go. I think it matches the drawing up there too - eg:

Trk1Step[n].NoteNum
           .Vel
           .Length

Trk2Step[n].NoteNum
           .Vel
           .Length

Re: The bankstick - I'd like to use them if I can, but if not, it'll be saving dumps to PC/MIDI recorder
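Putting Thomas's split-arrays workaround together with the offset arithmetic from earlier in the thread, the fixed 32 x 16 x 3 layout could be sketched like this (all names are mine, and whether this is the best fit for SDCC is untested):

```c
/* 32 tracks x 16 steps x 3 params (note, velocity, length) = 1536 bytes,
   split into six 256-byte arrays to respect the per-array limit
   mentioned above. */
enum { SEQ_TRACKS = 32, SEQ_STEPS = 16, SEQ_PARAMS = 3 };

static unsigned char bank0[256], bank1[256], bank2[256];
static unsigned char bank3[256], bank4[256], bank5[256];
static unsigned char *const bank[6] = {bank0, bank1, bank2,
                                       bank3, bank4, bank5};

/* Offset fits in 16 bits: max is 31*48 + 15*3 + 2 = 1535. */
static unsigned int seq_offset(unsigned char track, unsigned char step,
                               unsigned char param)
{
    return (unsigned int)track * (SEQ_STEPS * SEQ_PARAMS)
         + (unsigned int)step * SEQ_PARAMS + param;
}

unsigned char seq_read(unsigned char track, unsigned char step,
                       unsigned char param)
{
    unsigned int o = seq_offset(track, step, param);
    return bank[o >> 8][o & 0xFF];   /* top byte picks the chunk */
}

void seq_write(unsigned char track, unsigned char step,
               unsigned char param, unsigned char value)
{
    unsigned int o = seq_offset(track, step, param);
    bank[o >> 8][o & 0xFF] = value;
}
```

Random reads and writes then go through `seq_read`/`seq_write`, without any array bigger than 256 bytes.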

Link to comment
Share on other sites

point 4, what are you getting at here?  people who are new to programming are overwhelmed by c, trying to keep everything together and doing what it is supposed to do.  making them read the assembly derived from their code (probably without documentation) at best will scare them, or even drive them away.

Hi OrganGrinder!

The problem is that we are not programming on a "normal" CPU but on a very restricted controller. And SDCC is not ANSI-C.

So it is worth looking at assembler programming of the PIC (e.g. in german www.sprut.de) and seeing what's really produced by the compiler. It can be really eye-opening.

I don't think it will "drive them away". It's only another option.

Link to comment
Share on other sites

Re: The bankstick - I'd like to use them if I can, but if not, it'll be saving dumps to PC/MIDI recorder

The point is, that it does not really matter if you dump to MIDI or dump to Bankstick. But you should implement this in a very, very early state, or you'll get the same results as me (= rewriting everything, because the data-concept is crappy!)

What I've read from the "Concept of Bankstick-Storage"-Thread is to set up a data-structure on a bankstick-file, with all the needed address-ranges, so that no dump at all is needed, but just a command to write a specific section of RAM to Bankstick or transfer that section via midi.

My (too complex, double code size (!) and very slow) implementation has been to dump the data (buffer[32]; buffer[0] = param[1]; buffer[1] = param[2]... and so on...) and read it back in.

I would split up the source-code in different files for steps and tracks.

You should create a typedefinition for the steps. By creating a track, this track allocates its 16 step-variables. Know what I mean?

Therefore you have a quite clear data structure, can easily edit and change and avoid additional code just to access this data.

Best,

Michael

Link to comment
Share on other sites

The point is, that it does not really matter if you dump to MIDI or dump to Bankstick. But you should implement this in a very, very early state

What I've read from the "Concept of Bankstick-Storage"-Thread is to set up a data-structure on a bankstick-file, with all the needed address-ranges, so that no dump at all is needed, but just a command to write a specific section of RAM to Bankstick or transfer that section via midi.

Thanks mate, I'll take that into account.

This sounds like it would suit something like my original plan, which was the ASM-style way of defining a range of memory for the entire DB (using the above example, it would be 3 parameters x 8 bits = 24 bits x 16 steps x 32 tracks = 1.5 kB) and using pointer math... I'm still stuck on the right way to section this memory off if not using arrays, though :\

You should create a type definition for the steps. By creating a track, that track allocates its 16 step variables. Know what I mean?

That way you have quite a clear data structure, can easily edit and change it, and avoid additional code just to access this data.

I think I know what you mean... As in, create the typedef for the step structure and on startup, run an initialisation which builds the database from multiple instances of that type?

Again, guys I have to thank you for the advice. I understand if this is taking too much of your time and I totally understand if you have to tell me to go away and do some more reading :)


hi tom

Hi OrganGrinder, thanks for the verbose reply.

i agree that you can make valid code with the use of pointer math.  but nowadays it is considered bad programming style.  there are even books on learning c which do not cover pointer mathematics.

That is to the disadvantage of the end user.  You can often optimize the assembly code if you understand pointer mathematics.  I'm pretty sure that's TK's issue with SDCC - it generates bloated code to calculate array offsets.

the big problem with c/c++ is that there are no checks on memory access.  a pointer can be assigned to any location in memory; the type associated with the pointer is only used in determining the size of the memory block being accessed.  with arrays, using a subscript that is too large, or using pointer mathematics, can end up accessing memory outside of the array.  the big problem is that accessing memory outside of an array is a runtime error which is not picked up by the compiler, and may only be detected through observing "strange" results.

While you are correct, this necessary evil of C/C++ has nothing to do with the pointer math I used in my post.  As you show in your example, no array access has its bounds checked.  Doing it the way I proposed is, in terms of bounds checking, just as dangerous as making an access via array[6][5][4].

btw tom

doing something like this is a bad idea.

Definitely.  In my defense, I wrote this with minutes on the clock, because my girlfriend was yelling at me to get off the damn computer since we were going out for the night :)

SECOND, the pic is not a pentium 4.  what i am getting at is that modern pentium chips are capable of performing an integer multiply in a single clock cycle; the pic takes several cycles.  therefore, frequent use of multiply and divide will slow your code down.  the only fix i can think of is to rework your algorithms, but in the above case, i don't think that is possible without doing some nifty stuff which will take longer than applying the multiplies.

Here's the thing: if you access memory using array[a][c][d], THE COMPILER CONVERTS THAT TO ASSEMBLY CODE THAT IS EQUIVALENT TO MY EXAMPLE (sans the function call, but we can get rid of that by making the function inline, or by turning the function itself into a preprocessor macro).

THIRD, see my comments on pointer mathematics - especially if you try to access beyond your declared array sizes

Again, no array access has its bounds checked, so what you are really saying is "using arrays in C is bad", and not really anything negative about my specific usage of them, aside from the magic numbers stuffed into my ugly example.

oh - just reread your post, it looks like you are not advocating the use of pointer math, just offering it as an alternative if all else fails.  sorry about hitting so hard, but at least i have put out there why pointer math should be avoided.

OrganGrinder

No problem, I enjoy it.  I look forward to your reply.

Cheers,

Tom

