I am trying to identify whether my resource is failing due to timeouts, and at the same time I want to analyze how many failures are timeouts. So I am trying to attach two timeouts to a request: one that simply logs that the response was not received in time, without throwing an error, and another that actually throws the error. Something like this:
```js
import { firstValueFrom, from, throwError, timeout } from "rxjs";

const checkResponseTimeouts = async () => {
  const data = await firstValueFrom(
    from(getData())
      .pipe(
        timeout({
          each: 2500,
          with: (sub) => {
            // this is not the way it works, but just to make the example
            console.log("response not received within 2.5 seconds");
            return true;
          },
        })
      )
      .pipe(
        timeout({
          each: 10000,
          with: () => {
            console.log("request timed out after 10 seconds");
            return throwError(() => new Error("Some error"));
          },
        })
      )
  );
  return data;
};
```
The code above is not correct; it's just to demonstrate the problem. I am new to RxJS and a few things seem confusing to me. Help will be much appreciated.
You could create a custom operator for this, which would be generic and therefore reusable.
We create a `race` between the original observable and one that we create ourselves. The one we create ourselves never emits, which ensures that the `race` can only be won by the original observable. But within our observable, we call a callback after a certain amount of time. Most importantly, we clean up the `timeout` if the observable is closed, which can only happen when the original observable emits first or throws.
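A minimal sketch of such an operator could look like this (the name `logOnTimeout` and its exact signature are illustrative):

```js
import { Observable, race } from "rxjs";

// Custom operator: run `callback` as a side effect if `source` has not
// emitted within `duration` ms, without interrupting `source` itself.
const logOnTimeout = (duration, callback) => (source) =>
  race(
    source,
    new Observable(() => {
      // Schedule the side effect. This observable never emits, so the
      // race can only be won by `source`.
      const handle = setTimeout(callback, duration);
      // Teardown: clear the timeout once the subscription closes, which
      // happens when `source` emits, errors, or completes first.
      return () => clearTimeout(handle);
    })
  );
```

Then to use it (again a sketch; I've used a `5000` ms hard timeout here to match the outputs below):

```js
import { firstValueFrom, from, throwError, timeout } from "rxjs";

const checkResponseTimeouts = async () => {
  const data = await firstValueFrom(
    from(getData()).pipe(
      // Side effect only: log, but keep waiting for the response.
      logOnTimeout(2500, () =>
        console.log("response not received within 2.5 seconds")
      ),
      // Hard timeout: switch to an error after 5 seconds.
      timeout({
        each: 5000,
        with: () =>
          throwError(() => new Error("request timed out after 5 seconds")),
      })
    )
  );
  return data;
};
```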
If we apply a short delay, we get this output:
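Assuming `getData()` is stubbed with something like `of("some data").pipe(delay(1000))` and the resolved value is logged, it would be roughly:

```
some data
```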
If we apply a delay that is above the `2500` ms timeout for the side effect but below the `5000` ms timeout that throws, we get this output:
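```
response not received within 2.5 seconds
some data
```

And if we apply a delay that is greater than the `5000` ms timeout, we get this output:

```
response not received within 2.5 seconds
Error: request timed out after 5 seconds
```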
Here's a live demo on Stackblitz that you can play with.