• 0 Posts
  • 13 Comments
Joined 2 years ago
Cake day: June 13th, 2023

  • Second one was, IIRC, one of the primary motivating use cases for IAsyncEnumerable<T>. It has to be IAsyncEnumerable<T> all the way up and down, but it’s elegant enough. Quite often, depending on the API, it might naturally be implemented by a variant of “paging” behind the scenes. One advantage here is that since the updates throughout the final await foreach can happen entirely on the UI thread (assuming a compliant SynchronizationContext.Current, which is the case at least for WPF), you can see the results streaming in one-by-one as they arrive from the original repository if you want.
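    A minimal sketch of that paging pattern (the repository, FetchPageAsync, and the page size are all made up for illustration, not a real API):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Repository
{
    // Hypothetical paged fetch: returns up to pageSize items starting at offset.
    // Here it just serves the numbers 0..24 with simulated I/O latency.
    public async Task<IReadOnlyList<int>> FetchPageAsync(int offset, int pageSize)
    {
        await Task.Delay(10); // simulate network/database latency
        var page = new List<int>();
        for (int i = offset; i < Math.Min(offset + pageSize, 25); i++)
            page.Add(i);
        return page;
    }

    // Exposes the paged API as a single async stream.
    public async IAsyncEnumerable<int> GetAllAsync(int pageSize = 10)
    {
        int offset = 0;
        while (true)
        {
            var page = await FetchPageAsync(offset, pageSize);
            if (page.Count == 0) yield break;
            foreach (var item in page)
                yield return item; // consumers see items one-by-one as pages arrive
            offset += page.Count;
        }
    }
}

class Program
{
    static async Task Main()
    {
        var repo = new Repository();
        // In a WPF app with SynchronizationContext.Current set, this loop body
        // resumes on the UI thread, so you can update the UI per item.
        await foreach (var item in repo.GetAllAsync())
            Console.WriteLine(item);
    }
}
```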

    Before watching the video, I correctly guessed from the title what the first bug would be, but my guess for bug #2 turned out to be more closely related to the far more insidious one he describes starting at 17:27. You don’t have to get very fancy to see connections held open too long:

    I don’t know how common it is more broadly, but I’ve seen plenty of code that skips using and just calls Dispose directly (or, more commonly, a Close method). While it’s generally advised to prefer using over calling Dispose directly, admittedly it’s not always a huge deal: if you don’t otherwise have a robust story for what happens when exceptions get thrown, then it ordinarily doesn’t matter that you’re not explicitly cleaning up those resources, since the program might just be ending anyway.

    With iterators, though, skipping using can leave the connection open even when no exception is thrown. try/finally blocks (including using scopes, which I assume still get lowered to the same thing) get extra treatment in iterators to make sure the finally part executes when the iterator is disposed, which happens at the end of any foreach loop over it (including the implicit loop behind calls like .First).
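    A sketch of both behaviors, using a stand-in FakeConnection rather than a real database connection (all names here are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class FakeConnection : IDisposable
{
    public static bool IsOpen;
    public FakeConnection() => IsOpen = true;
    public void Dispose() => IsOpen = false;

    public IEnumerable<int> Rows()
    {
        yield return 1;
        yield return 2;
        yield return 3;
    }
}

static class Demo
{
    // The using scope is lowered to try/finally inside the iterator's state
    // machine; the finally (and thus Dispose) runs when the iterator itself
    // is disposed.
    static IEnumerable<int> Query()
    {
        using var conn = new FakeConnection();
        foreach (var row in conn.Rows())
            yield return row;
    }

    static void Main()
    {
        // .First() only advances the iterator once, but it disposes the
        // enumerator when done, so the connection still gets closed.
        var first = Query().First();
        Console.WriteLine(FakeConnection.IsOpen); // False: disposed

        // Manually advancing without disposing leaves the connection open.
        var e = Query().GetEnumerator();
        e.MoveNext();
        Console.WriteLine(FakeConnection.IsOpen); // True: never disposed
    }
}
```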

  • These are fun rabbit holes to go down. Everything here is true, of course: Big-O complexity isn’t everything, context always matters, and measurements trump guesses.

    But also: how many times have you encountered a performance problem caused by a slow O(n) solution that you fixed by turning it into a fast O(n²) solution, compared to the other way around? The difference between 721ns and 72.1ns is almost always irrelevant (and is certainly irrelevant off the hot path), and in all likelihood the same can be said at n=500 (even 500× those numbers still doesn’t reach 0.5ms).

    So unless context tells me that I have a good reason to think otherwise, I’m writing the one that uses a hash-based collection. As the codebase evolves in the future and the same bits of code are used in novel situations, I am much less likely to regret leaving microseconds on the table at small input sizes than to regret leaving milliseconds or seconds on the table at large input sizes.
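    For example (the intersection task and both helper names are hypothetical, just to contrast the two shapes):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Intersect
{
    // O(n*m): List.Contains is a linear scan. Fine at small sizes,
    // painful when both inputs grow.
    public static List<int> Nested(List<int> a, List<int> b) =>
        a.Where(x => b.Contains(x)).ToList();

    // O(n+m): HashSet.Contains is O(1) amortized, at the cost of
    // building the set up front.
    public static List<int> Hashed(List<int> a, List<int> b)
    {
        var set = new HashSet<int>(b);
        return a.Where(set.Contains).ToList();
    }

    static void Main()
    {
        var a = Enumerable.Range(0, 10).ToList();
        var b = new List<int> { 3, 5, 7, 100 };
        Console.WriteLine(string.Join(",", Nested(a, b))); // 3,5,7
        Console.WriteLine(string.Join(",", Hashed(a, b))); // 3,5,7
    }
}
```

    Both give the same answer; the hash-based one is the one I’d rather find in the codebase when the inputs grow.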

    As a trained practitioner of “the deeper magics” myself, I feel the need to point out that there’s a reason we call these things “the deeper magics”: heuristics like “better Big-O means better performance” generally point you in the right direction when it matters, and in the wrong direction only when it doesn’t matter.