I am currently working on an exercise from CLRS, here is the problem: 11.2-3 Professor Marley hypothesizes that he can obtain substantial performance gains by modifying the chaining scheme to keep each list in sorted order. How does the professor's modification affect the running time for successful searches, unsuccessful searches, insertions, and deletions?
I saw on the internet that the answer is the following:

I do not understand why the result is like this. My answer is that since the linked lists are sorted, we can use binary search, so the expected search time (as well as the worst-case time) is Θ(log₂ α), where α is the load factor n/m, n being the number of keys actually stored in the table and m its capacity.
I agree that deletion still takes Θ(1) time (if the lists are doubly linked), and I said insertion will now take Θ(log₂ α) because you need to determine the correct place for the element you are adding to the list. Why is this not the correct answer?
A technical point: if you store each bucket as a linked list, then you can't use binary search over the items in that list in time O(log b), where b is the number of items in the bucket. A linked list has no constant-time access to its middle element, so each probe of a would-be binary search degenerates into a pointer walk, and the search stays linear in b.
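For reference, here is a minimal sketch (the Node class and function names are mine, not from CLRS) of what a sorted chain actually buys you: a linear scan that can stop early, but still Θ(b) in the worst case.

```python
# Minimal sketch (names are mine, not from CLRS). With a sorted singly
# linked chain, the best available search is a linear scan that can stop
# early; it is still Theta(b) in the worst case because a linked list
# gives no O(1) access to its middle element.

class Node:
    def __init__(self, key, nxt=None):
        self.key = key
        self.next = nxt

def chain_search(head, key):
    """Search a sorted chain, stopping as soon as we pass key's position."""
    node = head
    while node is not None and node.key < key:
        node = node.next                      # one pointer hop per step
    if node is not None and node.key == key:
        return node                           # successful search
    return None                               # unsuccessful search: stopped early, still O(b) worst case
```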
But let's suppose that instead of doing this you use dynamic arrays for each bucket. Then you could drop the per-bucket search time from O(b) down to O(log b), in an asymptotic sense (a sketch of this appears after the points below). However, if you were to do that:
You're now using dynamic arrays rather than linked lists for your buckets. Since most buckets won't hold many elements, dynamic arrays reserve capacity they rarely fill, and if the stored elements are large, the memory overhead of those unused slots starts to add up.
From a practical perspective, you now need a total order on the keys you're hashing, not just hashing and equality. In Theoryland, that's not a problem. In practice, though, it could be a bit of a nuisance.
But more importantly, you might want to ask whether this is worthwhile in the first place. Remember that in a practical hash table the load factor α is probably going to be very small (say, α ≤ 5 or so). For such small buckets, a binary search might actually be slower than a linear scan, even though it wins asymptotically for sufficiently large α.
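To make the dynamic-array idea concrete, here is a hedged sketch (the class and method names are mine; it assumes Python's built-in hash() and keys with a total order). Note that while lookups drop to O(log b) comparisons per bucket, insertion and deletion still pay O(b) to shift the array's elements.

```python
# Hedged sketch of the sorted dynamic-array bucket alternative discussed
# above (class and method names are mine, not a standard library structure).
import bisect

class SortedBucketHashSet:
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def contains(self, key):
        bucket = self._bucket(key)
        i = bisect.bisect_left(bucket, key)   # O(log b) comparisons
        return i < len(bucket) and bucket[i] == key

    def add(self, key):
        bucket = self._bucket(key)
        i = bisect.bisect_left(bucket, key)   # find the slot in O(log b)
        if i == len(bucket) or bucket[i] != key:
            bucket.insert(i, key)             # shifting elements costs O(b)

    def remove(self, key):
        bucket = self._bucket(key)
        i = bisect.bisect_left(bucket, key)
        if i < len(bucket) and bucket[i] == key:
            bucket.pop(i)                     # also O(b) in the worst case

# Example usage: s = SortedBucketHashSet(); s.add(17); s.contains(17) -> True
```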
So generally, you don't see this approach used in practice. If you're looking to speed up a hash table, it's probably better to change hashing strategies (say, use open addressing rather than chaining) or to try to squeeze performance out in other ways.