THINK C: How to measure time taken to execute a task?

Relating to THINK C Development

pfuentes69

Active Tinkerer
Oct 27, 2021
380
293
63
Switzerland
Hey,
I want to take some performance measurements, but I'm having trouble checking the time taken between two steps.
This is the code I'm testing:
Code:
#include <stdio.h>
#include <time.h>

void waitforkey(void)
{
    while(1)
        if (getchar())
            break;
}

main()
{
    clock_t t, tt;
   
    t = clock();
    printf("Press enter...");
    waitforkey();
    tt = clock();
    printf("Starting time: %d\n", t);
    printf("End time: %d\n", tt);
    printf("It took %d cycles\n", tt - t);
    return 0;
}

I get weird results... what am I doing wrong?
 

pfuentes69

I'm not really understanding...
If I do this...
Code:
#include <stdio.h>
#include <time.h>

void waitforkey(void)
{
    while(1)
        if (getchar())
            break;
}

main()
{
    int t, tt;
   
    t = clock();
    printf("Press enter...");
    waitforkey();
    tt = clock();
    printf("Starting time: %d\n", t);
    printf("End time: %d\n", tt);
    printf("It took %d cycles\n", tt - t);
    printf("It took %.3f seconds\n", (float)(tt - t) / (float)CLOCKS_PER_SEC) ;
    return 0;
}

I get this output (measuring 30 seconds with my watch):
Code:
Press enter...
Starting time: -3480
End time: -1652
It took 1828 cycles
It took 30.467 seconds
So the measurement is good, but the negative values don't make sense to me. I tried casting to "unsigned long", but then it's worse...

Some enlightening would be appreciated!
 

Crutch

Tinkerer
Jul 10, 2022
293
228
43
Chicago
clock_t is defined as unsigned long in time.h, but (pretty sure this is the issue) you are telling printf to display it as a signed short (‘%d’).

Try changing the printf format specifier to ‘%lu’
 

pfuentes69

Thanks. That did the trick.

Now I have another issue: I'm trying to write the result in my project using DrawString, and for that I'm converting the float to a string using sprintf, but I'm getting a "NAN" result... I need to look into it...
 

pfuentes69

So, it's getting weird... this sample code always works fine now, but when I use it in my project, it fails if I set the compiler to use the FPU.
For example:
Code:
char buf[50];
float fff = 12.345f;
sprintf(buf, "%.3f", fff);
DrawString(c2pstr(buf));
This works fine if the FPU is not used, but it prints "0.000" if I enable it in the compiler.
 

Mu0n

Active Tinkerer
Oct 29, 2021
609
560
93
Quebec
www.youtube.com
There are some Macintosh time management functions that are ideal to use here.

TickCount will give you 1/60th-of-a-second resolution timing - not perfect, but it works in a pinch, especially if you suspect your operations take longer than the screen refresh interval. If you need to benchmark short operations, just do 'em 100x or 1000x and divide appropriately.

But the better, more precise solution is to go for InsTime, do some bookkeeping with PrimeTime at the start, then RmvTime when you're ready at the end.

See more here

 

Crutch

Right.

@pfuentes69 : the clock() function in time.h is just calling TickCount(), FYI (THINK C includes all sources for the standard library so you can check this yourself in time.c: the definition is literally “clock_t clock(void) {return TickCount();}”). So you would get the same result calling TickCount().

Your issue is in the floating point conversion, I’m not sure what’s going on there.
 

pfuentes69

Crutch said: "Your issue is in the floating point conversion, I'm not sure what's going on there."
I saw another funky behaviour related to the FPU...

For example, I was using this code to calculate the colour of a shape:
Code:
// Calculate color
// dpLight is a float with values from 0.0f to 1.0f
// colModel is the base colour
RGBColor colPix;
colPix.red = (int)((float)colModel.red * dpLight);
colPix.green = (int)((float)colModel.green * dpLight);
colPix.blue = (int)((float)colModel.blue * dpLight);
This works fine if the FPU is disabled, but with the FPU I was getting just white and black colours, so the values were either 0x0000 or 0xffff.
To make it work with FPU (and non-FPU) I had to modify it to this:
Code:
// Calculate color
// dpLight is a float with values from 0.0f to 1.0f
// colModel is the base colour
RGBColor colPix;
colPix.red = (long)(colModel.red * dpLight);
colPix.green = (long)(colModel.green * dpLight);
colPix.blue = (long)(colModel.blue * dpLight);
It's clear that the second version is better, but I can't understand why the first version fails with the FPU.
And I saw this before in another program I wrote to display fractals... it's so weird...
 

Crutch

Tinkerer
Jul 10, 2022
293
228
43
Chicago
Are you using 16-bit ints? If so, this will easily overflow:

colPix.red = (int)((float)colModel.red * dpLight);

since colModel.red is an unsigned int (if more than 50% pure red, it's > 32767), which, when cast to a float, is still > 32767. If you then (say) multiply by 1.0 and cast that float to a signed 16-bit int, you could get back 0xFFFF due to overflow. (It's possible that, without the FPU, the overflow is not detected, the values "wrap around", and you get something visible.)

I think this should perhaps work:

colPix.red = (unsigned int)((float)colModel.red * dpLight);
 