Why isn't the timer running when I enter 1000 milliseconds as the interval?


I am working on a timer that prints a timestamp on every expiration, at whatever interval in milliseconds the user specifies.

The it_interval member of struct itimerspec takes seconds and nanoseconds.

I want to let the user specify the milliseconds, so I am doing a conversion in the constructor:

mNanoseconds = milliseconds * 1000000 (there are 1,000,000 nanoseconds in one millisecond).

When the user enters anything under 1000 milliseconds, the timer operates normally, but when I use 1000 milliseconds for the interval, the timer doesn't operate at all; it just sits there. I am unsure whether my conversion is the issue, or what else it could be.

mNanoseconds is a uint64_t, so I don't think the size of the mNanoseconds integer is the issue.

Timer.cxx

std::mutex g_thread_mutex;
Timer::Timer(){};

Timer::Timer(int milliseconds)
    : mMilliseconds(milliseconds)
{
    mNanoseconds = milliseconds * 1000000;
    std::cout << mNanoseconds << " Nanoseconds\n";
}

std::string Timer::getTime()
{
    std::time_t result = std::time(nullptr);
    return (std::asctime(std::localtime(&result)));
}

void Timer::startIntervalTimer()
{    
    struct itimerspec itime;
    struct timeval tv;
    int count = 0;
    tv.tv_sec = 0; // seconds 
    tv.tv_usec = 0; // microseconds
  
    gettimeofday(&tv, NULL);

    //itime.it_interval.tv_sec = 2; //it_interval (value between successive timer expirations)
    itime.it_interval.tv_nsec = mNanoseconds;
    itime.it_value.tv_sec = itime.it_interval.tv_nsec;

    int fd = timerfd_create(CLOCK_REALTIME, 0 );
    timerfd_settime(fd, TFD_TIMER_ABSTIME, &itime, NULL);
    while(count != 10)
    {
        uint64_t exp;
        int n = read(fd, &exp, sizeof(exp));

        //We don't lock the read(), but lock the actions we take when the read expires.
        //There is a delay here- so not sure what that means for time accuracy
        //Started to look into atomic locking, but not sure if that works here
        g_thread_mutex.lock();
        std::string t = getTime();
        std::cout << t << "  fd = " << fd << "  count # " << count << std::endl;
        g_thread_mutex.unlock();

        count++;
    }  
    stopTimer(fd, itime, tv);
}

There are 2 answers

Answer by Kevin:

Your nanosecond value is out of range. From the timerfd_settime man page:

timerfd_settime() can also fail with the following errors:

EINVAL

new_value is not properly initialized (one of the tv_nsec falls outside the range zero to 999,999,999).

You're setting your time up wrong:

itime.it_interval.tv_nsec = mNanoseconds;

Should be

itime.it_interval.tv_nsec = mNanoseconds % 1000000000;
itime.it_interval.tv_sec = mNanoseconds / 1000000000;

And make sure to check the return value of timerfd_settime.

Also, I'm not sure what you meant by

itime.it_value.tv_sec = itime.it_interval.tv_nsec;

Based on my reading of the man page, you're setting the initial expiration time in seconds to the nanoseconds value, which makes no sense. It should probably be based on the current time. You also need to zero out the rest of the fields of itime, otherwise the rest of the struct is filled with garbage data.

Answer by Rakesh Mehta:

One possible reason I can think of: milliseconds * 1000000 is computed in a temporary of type int, so it can overflow.

There is no point in doing the multiplication in int here. You can cast milliseconds to uint64_t first:

static_cast<uint64_t>(milliseconds) * 1000000

The other solution is to assign first and then multiply, so the arithmetic happens on the 64-bit variable:

mNanoseconds = milliseconds;
mNanoseconds *= 1000000;